Time Value of Money, Real Options, Uncertainty & Risk in Technology Investment Decisions

“We have met the Enemy … and he is us”

is how the Kauffman Foundation starts its extensive report on investments in Venture Capital Funds and their abysmally poor performance over the last 20 years. Only 20 out of 200 Venture Funds generated returns that beat the public-market equivalent by more than 3%; 10 of those were funds created prior to 1995. Clearly there is something rotten in the state of valuation, value creation and management. Is this state of affairs limited to portfolio management (i.e., one might have hoped for a better diversified VC portfolio), or is this poor track record on investment decisions (even for diversified portfolios) generic to any investment decision made in any business? I will let smarter people answer this question. Though there is little doubt in my mind that the quote “We have met the Enemy … and he is us” could apply to most corporations, and the VC results might not be that far away from any corporation’s internal investment portfolio. Most business models and business cases will be subject to wishful thinking and a whole artillery of other biases that tend to overemphasize the positives and under-estimate (or ignore) the negatives. The avoidance of scenario thinking and reference class forecasting will tend to bias investments towards the upper boundaries (and beyond) of the achievable, and to ignore more attractive propositions that could be more valuable than the idea being pursued.

As I was going through my archive I stumbled over an old paper I wrote back in 2006, when I worked for T-Mobile International and Deutsche Telekom (a companion presentation is due on Slideshare). At the time I was heavily engaged with Finance and Strategy in transforming Technology Investment Decision Making into a more economically responsible framework than had been the case previously. My paper was a call for more sophisticated approaches to technology investment decisions in the telecom sector, as opposed to what was “standard practice” at the time and, in my opinion, pretty much still is.

Many who are involved in techno-economical & financial analysis, as well as the decision makers acting upon recommendations from their analysts, are in danger of basing their decisions on flawed economic analysis, or simply have no appreciation of the uncertainty and risk involved. A frequent mistake in deciding between investment options is ignoring one of the most central themes of finance & economics, the Time-Value-of-Money: the investment decision is taken insensitive to the timing of the money flow. Investment decisions based on Naïve TCO are good examples of such insensitivity bias and can lead to highly inefficient decision making. Naïve here implies that time and timing do not matter in the analysis and the subsequent decision.

Time-Value-of-Money:

“I like to get my money today rather than tomorrow, but I don’t mind paying tomorrow rather than today”.

Time and timing matter when it comes to cash. Any investment decision that does not consider the timing of expenses and/or income has a substantially higher likelihood of being economically inefficient, costing the shareholders and investors (a lot of) money. As a side note, Time-Value-of-Money assumes that you can actually do something with the cash today that is more valuable than waiting for it at a point in the future. Now that might work well for Homo Economicus but maybe not so for the majority of the human race (incl. Homo Financius).

Thus, if I am insensitive to the timing of payments it does not matter, for example, whether I have to pay €110 Million more for a system in the first year compared to deferring that increment to the 5th year.

Clearly wrong!

In the outgoing cash flow (CF) example illustrated above, the naïve TCO (i.e., total cost of ownership) is the same for both CFs. I use the word naïve here to represent a non-discounted valuation framework. Both the Blue and the Orange CF represent a naïve TCO of €200 Million. So a decision maker (or an analyst) not considering time-value-of-money would be indifferent to one or the other cash flow scenario. Would the decision maker consider time-value-of-money (or, in the above very obvious case, simply look at the timing of the cash out), the decision would clearly be in favor of Blue. Furthermore, front-loaded investment decisions are scary endeavors, particularly for unproven technologies or business decisions with a high degree of future unknowns, as the exposure to risks and losses is so much higher than with a more carefully designed cash-out/investment trajectory that follows the reduction of risk or the increase in growth. When only presented with the (naïve) TCO rather than the cash flows, it might even be that some scenarios are unfavorable from a naïve TCO framework but favorable when time-value-of-money is considered. The following illustrates this;

The Orange CF above amounts to a naïve TCO of €180 Million versus Blue’s TCO of €200 Million. Clearly, if all the decision maker is presented with is the two (naïve) TCOs, he can only choose the Orange scenario and “save” €20 Million. However, when time-value-of-money is considered the decision should clearly be for the Blue scenario, which in terms of discounted cash flows yields €18 Million in its favor despite the TCO being €20 Million in favor of Orange. Obviously, the Blue scenario has many other advantages over Orange.
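As a minimal sketch of the arithmetic, the comparison can be done in a few lines of Python. The cash-out profiles below are hypothetical stand-ins (not the exact figures behind the charts); the point is that the scenario with the lower naïve TCO can still be the more expensive one once cash flows are discounted.

```python
# Hypothetical cash-out profiles (EUR Million per year); not the exact figures from the charts.
def present_value(cash_flows, rate):
    """Discount yearly cash flows (year 1, 2, ...) back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

wacc = 0.10                              # assumed discount rate
orange = [100, 50, 20, 5, 5]             # front-loaded, naive TCO = 180
blue = [10, 20, 40, 60, 70]              # back-loaded,  naive TCO = 200

for name, cf in (("Orange", orange), ("Blue", blue)):
    print(f"{name:6s}: naive TCO = {sum(cf):5.0f}M, discounted cost = {present_value(cf, wacc):5.1f}M")
# Orange "wins" on naive TCO, Blue wins once time-value-of-money is considered.
```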

 

When does it make sense to invest in the future?

 

Frequently we are faced with technology investment decisions that require spending incremental cash now for a feature or functionality that we might only need at some point in the future. We believe that the cash-out today is more efficient (i.e., better value) than introducing the feature/functionality at the time when we believe it might really be needed.

 

Example of the value of optionality: Assume that you have two investment options and you need to advise management on which of the two is more favorable.

 

Product X with investment I1: provides support for 2 functionalities you need today and 1 that might be needed in the future (i.e., 3 Functionalities in total).

Product Y with investment I2: provides support for the 2 functionalities you need today and 3 functionalities that you might need in the future (i.e., 5 Functionalities in total).

 

I1 < I2 and ∆I = I2 − I1 > 0

 

If, in the future, we need more than 1 additional functionality it clearly makes sense to ask whether it is better to invest upfront in Product Y, rather than in X and then later in Y (when needed). Particularly when Product X would have to be de-commissioned when introducing Product Y, it is quite possible that investing in Product Y upfront is more favorable.

 

From a naïve TCO perspective it is clearly better to invest in Y than in X + Y. The “naïve” analyst would claim that investing in Y upfront saves us at least I1 (and, if he is really clever, de-installation costs and write-offs might be included as well, as savings or avoided cost).

 

Of course, if it should turn out that we do not need all the extra functionality that Product Y provides (within the useful life of Product X), then we have clearly made a mistake and over-invested by ∆I = I2 − I1, and we would have been better off sticking to Product X (i.e., the reference is now between investing in Product Y versus Product X upfront).

 

Once we call upon an option, make an investment decision, other possibilities and alternatives are banished to the “land of lost opportunities”.

 

Considering time-value-of-money (i.e., discounted cash flows) the math would still come out more favorable for Y than for X+Y, though the incremental penalty would be lower, as the future investment in Product Y would come later and would be discounted back to Present Value.

 

So we should always upfront invest in the future?

 

Categorically no we should not!

 

Above we have identified 2 outcomes (though there are others as well);

Outcome 1: Product Y is not needed within lifetime T of Product X.

Outcome 2: Product Y is needed within lifetime T of Product X.

 

In our example, for Outcome 1 the NPV difference between Product X and Product Y is -10 Million US$. If we invest into Product Y and do not need all its functionality within the lifetime of Product X we would have “wasted” 10 Million US$ (i.e., opportunity cost) that could have been avoided by sticking to Product X.

 

The value of Outcome 2 is a bit more complicated as it depends on when Product Y is required within the lifetime of Product X. Let’s assume that Product X’s useful lifetime is 7 years, i.e., the period after which we would need to replace Product X anyway, requiring a modernization investment. We assume that for the first 2 years (i.e., yr 2 and yr 3) there is no need for the additional functionality that Product Y offers (or it would be obvious to deploy it up-front, at least within this example’s economics). From Year 4 to Year 7 there is an increasing likelihood that the additional functionalities of Product Y will be required.

 

In Outcome 2 the blended NPV is 3.0 Million US$ in favor of deploying Product Y upfront instead of Product X and then later Product Y (i.e., the X+Y scenario) when it is required. After the 7th year we would have to re-invest in a new product, and obviously looking beyond this timeline makes little sense in our simplified investment example.

 

Finally, if we assess that there is a 40% chance that Product Y will not be required within the lifetime of Product X, the overall expected NPV of investing in Product Y upfront becomes negative (i.e., 40%×(−10) + 3 = −1 Million). Thus we conclude it is better to defer the investment in Product Y than to invest in it upfront. In other words, it is economically more valuable to deploy Product X within this example’s assumptions.
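Written out in a few lines of Python, the expected-value arithmetic above looks as follows (figures exactly as quoted in the text; note that the +3 Million enters unweighted because it is quoted as an already blended, i.e. probability/timing-weighted, value for Outcome 2):

```python
# Expected NPV of investing in Product Y upfront relative to deferring (figures in $M as quoted).
p_not_needed = 0.40            # chance Y's extra functionality is never required within X's lifetime
loss_if_not_needed = -10.0     # Outcome 1: upfront Y turns out to be an over-investment
blended_gain_if_needed = 3.0   # Outcome 2: blended (timing-weighted) advantage of upfront Y

expected_value_upfront_y = p_not_needed * loss_if_not_needed + blended_gain_if_needed
print(f"Expected NPV of Y upfront vs deferring: {expected_value_upfront_y:+.1f}M")
# -1.0M -> deferring the investment in Product Y is the better expected decision.
```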

 

I could make an even stronger case for deferring the investment in Product Y: (1) if I can re-use Product X when I introduce Product Y, (2) if I believe that the price of Product Y will be much lower in the future (i.e., due to maturity and competition), or (3) if there is a relatively high likelihood that Product Y might become obsolete before the additional functionalities are required (e.g., new superior products at lower cost compared to Product Y). The last point is often seen when investing in the very first product releases (i.e., substantial immaturity) or in highly innovative products just being introduced. Moreover, there might be lower-cost lower-tech options that could provide the same functionality when required, which would make investing upfront in higher-tech higher-cost options un-economical. For example, a product that provides a single targeted functionality at the point in time it is needed might be more economical than investing, long before it is really required, in a product supporting 5 functionalities (of which 3 are not required).
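Point (2) is easy to quantify with a small, purely hypothetical illustration: with steady price erosion, the present value of buying Product Y later can be a fraction of buying it today (the price, erosion rate and discount rate below are illustrative assumptions, not figures from the example above).

```python
# Hypothetical illustration of price erosion + discounting for a deferred purchase.
def pv_of_deferred_purchase(price_today, erosion_per_year, discount_rate, years_deferred):
    """Present value of buying in `years_deferred` years at an eroded price."""
    future_price = price_today * (1 - erosion_per_year) ** years_deferred
    return future_price / (1 + discount_rate) ** years_deferred

price_today = 10.0   # $M, assumed price of Product Y if bought now
print(f"Buy now:             {price_today:.1f}M")
print(f"Buy in 4 years (PV): {pv_of_deferred_purchase(price_today, 0.10, 0.08, 4):.1f}M")
# With 10% yearly price erosion and an 8% discount rate, the deferred purchase costs
# roughly 4.8M in present-value terms -- less than half of buying upfront.
```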

 

Many business cases focus narrowly on proving a particular point of view. Typically at most 2 scenarios are compared directly: the old way and the proposed way. No surprise! The new proposed way of doing things will be more favorable than the old (why else do the analysis;-). While such analysis cannot be claimed to be wrong, it poses the danger of ignoring more valuable options that are available (but ignored by the analyst). The value of optionality and timing is ignored in most business cases.

 

For many technology investment decisions time is more a friend than an enemy. Deferring investing into a promise of future functionality is frequently the better value-optimizing strategy.

 

Rules of my thumb:

  • If a functionality is likely to be required beyond 36 months, the better decision is to defer the investment to later.
  • Innovative products with no immediate use are better introduced later rather than sooner, as improvement cycles and competition will make them more economical to introduce later (and we avoid obsolescence risk).
  • Right timing is better than being the first (e.g., as Apple has proven a couple of times).

Decision makers are frequently betting (knowingly or unknowingly) that a future event will happen and that making an incremental investment decision today is more valuable than deferring the decision to later. Basically we deal with an Option or a Choice. When we deal with a non-financial Option we will call it a Real Option. Analyzing Real Options can be complex. Many factors need to be considered in order to form a reasonable judgment of whether investing today in a functionality that only later might be required makes sense or not;

  1. When will the functionality be required (i.e., the earliest, most-likely and the latest).
  2. Given the timing of when it is required, what is the likelihood that something cheaper and better will be available (i.e., price-erosion, product competition, product development, etc..).
  3. Solutions obsolescence risks.

As there are various uncertain elements involved in whether or not to invest in a Real Option, the analysis cannot be treated as a normal deterministic discounted cash flow. The probabilistic nature of the decision needs to be correctly reflected in the analysis.

 

Most business models & cases are deterministic despite the probabilistic (i.e., uncertain and risky) nature they aim to address.

 

Most business models & cases are 1-dimensional in the sense of only considering what the analyst tries to prove and not per se alternative options.

 

My 2006 paper deals with such decisions and how to analyze them systematically and provide a richer and hopefully better framework for decision making subject to uncertainty (i.e., a fairly high proportion of investment decisions within technology).

Enjoy !

ABSTRACT

The typical business case analysis, based on discounted cash flows (DCF) and net-present valuation (NPV), inherently assumes that the future is known and that, regardless of future events, the business will follow the strategy laid down in the present. It is obvious that the future is not deterministic but highly probabilistic, and that, depending on events, a company’s strategy will be adapted to achieve maximum value out of its operation. It is important for a company to manage its investment portfolio actively and understand which strategic options generate the highest return on investment. In every technology decision our industry is faced with various embedded options, which need to be considered together with the ever-prevalent uncertainty and risk of the real world. It is often overlooked that uncertainty creates a wealth of opportunities if the risk can be managed by mitigation and hedging. An important result concerning options is that the higher the uncertainty of the underlying asset, the more valuable the related option can become. This paper will provide the background for conventional project valuation, such as DCF and NPV. Moreover, it will be shown how a deterministic (i.e., conventional) business case can easily be made probabilistic, and what additional information can be gained by simulating the private as well as market-related uncertainties. Finally, real options analysis (ROA) will be presented as a natural extension of the conventional net-present value analysis. This paper will provide several examples of options in technology, such as radio access site-rollout strategies, product development options, and platform architectural choices.

INTRODUCTION

In technology, as well as in mainstream finance, business decisions are more often than not based on discounted cash flow (DCF) calculations using net-present value (NPV) as the decision rationale for initiating substantial investments. Irrespective of the complexity and multitude of assumptions made in business modeling, the decision is represented by one single figure, the net present value. The NPV basically takes the future cash flows and discounts these back to the present, assuming a so-called “risk-adjusted” discount rate. In most conventional analysis the “risk-adjusted” rate is chosen rather arbitrarily (e.g., 10%-25%) and is assumed to represent all project uncertainties and risks. As good practice, the risk-adjusted rate should always be compared with the weighted average cost of capital (WACC) and benchmarked against what the Capital Asset Pricing Model (CAPM) would yield. Though in general the base rate will be set by your finance department and is not per se something the analyst needs to worry too much about. Suffice it to say that I am not a believer that all risk can be accounted for in the discount rate, and that including risks/uncertainty in the cash flow model itself is essential.
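For reference, these are the textbook relations the chosen rate can be sanity-checked against (standard definitions, not specific to the 2006 paper):

CAPM: re = rf + β × (E[rm] − rf), i.e., the expected return on equity given the risk-free rate rf, the expected market return E[rm] and the asset’s beta β.

WACC = (E/V) × re + (D/V) × rd × (1 − tc), with E the market value of equity, D the debt, V = E + D, rd the cost of debt and tc the corporate tax rate.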

 

It is naïve to believe that the applied discount rate can account for all risk a project may face.

 

In many respects the conventional valuation can be seen as supporting a one-dimensional decision process. DCF and NPV methodologies are commonly accepted in our industry and the finance community [1]. However, there is a lack of understanding of how uncertainty and risk, which is part of our business, impacts the methodology in use. The bulk of business cases and plans are deterministic by design. It would be far more appropriate to work with probabilistic business models reflecting uncertainty and risk. A probabilistic business model, in the hands of the true practitioner, provides considerable insight useful for steering strategic investment initiatives. It is essential that a proper balance is found between model complexity and result transparency. With available tools, such as Palisade Corporation’s @RISK Microsoft Excel add-in software [2], it is very easy to convert a conventional business case into a probabilistic model. The Analyst would need to converse with subject-matter experts in order to provide a reasonable representation of relevant uncertainties, statistical distributions, and their ranges in the probabilistic business model [3].

 

In this paper the word Uncertainty will be used as representing the stochastic (i.e., random) nature of the environment. Uncertainty as a concept represents events and external factors which cannot be directly controlled. The word Volatility will be used interchangeably with uncertainty. By Risk is meant the exposure to uncertainty, e.g., uncertain cash-flows resulting in running out of money and catastrophic business failure. The total risk is determined by the collection of uncertain events and Management’s ability to deal with these uncertainties through mitigation and “luck”. Moreover, the words Option and Choice will also be used interchangeably throughout this paper.

 

Luck is something that never should be underestimated.

 

While working on the T-Mobile NL business case for the implementation of Wireless Application Protocol (WAP) for circuit switched data (CSD), a case was presented showing a 10% chance of losing money (over a 3 year period). The business case also showed an expected NPV of €10 Million, as well as a 10% chance of making more than €20 Million over a 3 year period. The spread in the NPV, due to identified uncertainties, was graphically visualized.

 

Management, however, requested only to be presented with the “normal” business case NPV as this “was what they could make a decision upon”. It is worthwhile to understand that the presenters made the mistake of making the presentation to Management too probabilistic and mathematical, which in retrospect was the wrong approach [4]. Furthermore, as WAP was seen as something strategically important for long-term business survival, moving towards mobile data, it is not conceivable that Management would have turned down WAP even if the business case had been negative.

In retrospect, the WAP business case would have been more useful if it had pointed out the value of the embedded options inherent in the project;

  1. Defer/delay until market conditions became more certain.
  2. Defer/delay until GPRS became available.
  3. Outsource service with option to in-source or terminate depending on market conditions and service uptake.
  4. Defer/delay until technology becomes more mature, etc..

Financial “wisdom” states that business decisions should be made which target the creation of value [5]. It is widely accepted that given a positive NPV, monetary value will be created for the company; therefore projects with positive NPV should be implemented. Most companies’ investment means are limited. Innovative companies are often in a situation with more funding demand than funds available. It is therefore reasonable that projects targeting superior NPVs should be chosen. Considering the importance and weight businesses associate with the conventional analysis using DCF and NPV, it is worthwhile summarizing the key assumptions underlying decisions made using NPV:

  • Once a Decision is made, future cash flow streams are assumed fixed. There is no flexibility as soon as a decision has been made, and the project will be “passively” managed.
  • Cash-flow uncertainty is not considered, other than working with a risk-adjusted discount rate. The discount rate is often arbitrarily chosen (between 9%-25%) reflecting the analyst’s subjective perception of risk (and uncertainty) with the logic being the higher the discount rate the higher the anticipated risk (note: the applied rate should be reasonably consistent with Weighted Average Cost of Capital  and Capital Asset Pricing Model (CAPM)).
  • All risks are completely accounted for in the discount rate (which is naïve).
  • The discount rate remains constant over the life-time of the project (which is naïve).
  • There is no consideration of the value of flexibility, choices and different options.
  • Strategic value is rarely incorporated into the analysis. It is well known that many important benefits are difficult (but not impossible) to value in a quantifiable sense, such as intangible assets or strategic positions; the implicit assumption being that if a strategy cannot be valued or quantified it should not be pursued.
  • Different project outcomes and the associated expected NPVs are rarely considered.
  • Cash-flows and investments are discounted with a single discount rate, assuming that market risk and private (company) risk are identical. Correct accounting should use the risk-free rate for private risk, while cash-flows subject to market risk should make use of a market risk-adjusted discount rate.

In the following several valuation methodologies will be introduced, which build upon and extend the conventional discounted cash flow and net-present value analysis, providing more powerful means for decision and strategic thinking.

 

TRADITIONAL VALUATION

The net-present value is defined as the difference between the values assigned to a given asset, the cash-flows, and the cost and capital expenditures of operating the asset. The traditional valuation approach is based on the net-present value (NPV) formulation [6]
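Written out in the notation defined below, the formulation takes the (approximate) form

NPV = Σ(t=1..T) Ct / (1 + rram)^t − Σ(t=0..T) It / (1 + rrap)^t ≈ −I0 + Σ(t=1..T) Ct* / (1 + r*)^t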

T is the period during which the valuation is considered, Ct is the future cash flow at time t, rram is the risk-adjusted discount rate applied to market-related risk, It is the investment cost at time t, and rrap is the risk-adjusted discount rate applied to private-related risk. In most analysis it is customary to assume the same discount rate for private as well as market risk, as it simplifies the valuation analysis. The “effective” discount rate r* is often arbitrarily chosen. I0 is the initial investment at time t=0, and Ct* = Ct – It (for t>0) is the difference between future cash flows and investment costs. The approximation (i.e., ≈ sign) only holds in the limit where the rate rrap is close to rram. The private risk-adjusted rate is expected to be lower than the market risk-adjusted rate. Therefore, any future investments and operating costs will weigh more than the future cash flows. Eventually value will be destroyed unless value growth can be achieved. It is therefore important to manage incurred cost, and at the same time explore growth aggressively (at minimum cost) over the project period. Assuming a risk-adjusted or effective rate for both market and private risk investments, costs and cash-flows can lead to a serious over-estimation of a given project’s value. In general, the private risk-adjusted rate rrap would be between the risk-free rate and the market risk-adjusted discount rate rram.

 

EXAMPLE 1: An initial network investment of 20 mio euro needs to be committed to provide a new service for the customer base. It is assumed that sustenance investment per year amounts to 2% of the initial investment and that operations & maintenance is 20% of the accumulated investment (50% in the initial year). Other network cost, such as transmission (assuming a centralized platform solution), increases by 10% per year due to increased traffic, with an initial cost of 150 thousand. The total network investment and cost structure should be discounted according to the risk-free rate (assumed to be 5%). Market assumptions: s-curve consistent growth assumed with a saturation of 5 Million service users after approximately 3 years. It has been assumed that the user pays 0.8 euro per month for the service and that the service price decreases by 10% per year. Cost of acquisition is assumed to be 1 euro per customer, increasing by 5% per year. Other market dependent cost is assumed initially to be 400 thousand and to increase by 10% per year. It is assumed that the project is terminated after 5 years and that the terminal value amounts to 0 euro. PV stands for present value and FV for future value. The PV has been discounted back to year 0. It can be seen from the table that the project breaks even after 3 years. The first analysis presents the NPV results (over a 5 year period) when differentiating between private (private risk-adjusted rate) and market (market risk-adjusted rate) risk taking; a positive NPV of 26M is found. This should be compared with the standard approach assuming an effective rate of 12.5%, which (not surprisingly) results in a positive NPV of 46M. The difference between the two approaches amounts to about 19M.


The example above compares the approach of using an effective discount rate r* with an analysis that differentiates between private rrap and market risk rram in the NPV calculation. The example illustrates a project valuation of introducing a new service. The introduction results in network investments and costs in order to provide and operate the service. Future cash-flows arise from the growth of the customer base (i.e., service users), and are offset by market-related costs. All network investments and costs are assumed to be subject to private risk and should be discounted with the risk-free rate. The market-related cost and revenues are subject to market risk and the risk-adjusted rate should be used [7]. Alternatively, all investment, costs and revenues can be treated with an effective discount rate. As seen from the example, the difference between the two valuation approaches can be substantial:

  • NPV = €26M for differentiated market and private risk, and
  • NPV = €46M using an effective discount rate (e.g., difference of €20M assuming the following discount rates rram = 20%, rrap =5%, r* = 12.5%). Obviously, as rram –> r* and rrap –> r* , the difference in the two valuation approaches will tend to zero. 
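For readers who want to reproduce the mechanics, here is a minimal Python sketch of the two valuation approaches. The uptake curve, cost timing and rounding are simplified assumptions, so the output only approximates the €26M / €46M quoted above; the point is how discounting the network (private-risk) flows at the risk-free rate and the market flows at the market rate differs from using one effective rate.

```python
# Sketch of Example 1's two valuation approaches: differentiated vs effective discount rate.
# The uptake curve and cost timing are simplified assumptions, so the figures only
# approximate the 26M / 46M quoted in the text.
import math

YEARS = range(1, 6)                           # 5-year horizon; year 0 holds the initial investment
r_private, r_market, r_effective = 0.05, 0.20, 0.125

def customers(year, s_max=5e6, midpoint=1.5, steepness=2.5):
    """Assumed logistic uptake saturating near 5 Million users after roughly 3 years."""
    return s_max / (1 + math.exp(-steepness * (year - midpoint)))

revenue, market_cost, network_cost = {}, {}, {}
initial_capex = 20e6
for t in YEARS:
    users = customers(t)
    arpu_per_month = 0.80 * (1 - 0.10) ** (t - 1)            # 0.8 euro/month, eroding 10%/year
    revenue[t] = users * arpu_per_month * 12
    new_users = max(users - customers(t - 1), 0)
    acquisition = new_users * 1.0 * (1 + 0.05) ** (t - 1)     # 1 euro per addition, +5%/year
    market_cost[t] = acquisition + 400e3 * (1 + 0.10) ** (t - 1)
    sustenance = 0.02 * initial_capex
    o_and_m = 0.20 * (initial_capex + sustenance * t)         # 20% of accumulated investment
    network_cost[t] = sustenance + o_and_m + 150e3 * (1 + 0.10) ** (t - 1)

def pv(flows, rate):
    return sum(value / (1 + rate) ** t for t, value in flows.items())

npv_differentiated = (pv(revenue, r_market) - pv(market_cost, r_market)
                      - pv(network_cost, r_private) - initial_capex)
npv_effective = (pv(revenue, r_effective) - pv(market_cost, r_effective)
                 - pv(network_cost, r_effective) - initial_capex)
print(f"NPV with differentiated rates: {npv_differentiated / 1e6:5.1f}M")
print(f"NPV with one effective rate:   {npv_effective / 1e6:5.1f}M")
```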

 

UNCERTAINTY, RISK & VALUATION

The traditional valuation methodology presented in the previous section makes no attempt to incorporate uncertainties and risk other than the effective discount-rate r* or risk-adjusted rates rram/rap. It is inherent in the analysis that cash-flows, as well as the future investments and cost structure, are assumed to be certain. The first level of incorporating uncertainty into the investment analysis would be to define market scenarios with an estimated (subjective) chance of occurring. A good introduction to uncertainty and risk modeling is provided in the well-written book by D. Vose [8], S.O. Sugiyama’s training notes [3] and S. Beninga’s “Financial Modeling” [7].

 

The Business Analyst working on the service introduction, presented in Example 1, assesses that there are 3 main NPV outcomes for the business model; NPV1= 45, NPV2= 20 and NPV3= -30.  The outcomes have been based on 3 different market assumptions related to customer uptake: 1. Optimistic, 2. Base and 3. Pessimistic. The NPVs are associated with the following chances of occurrence: P1 = 25%, P2 = 50% and P3 = 25%.

 

What would the expected net-present value be given the above scenarios?

 

The expected NPV (ENPV) would be ENPV = P1×NPV1 + P2×NPV2 + P3×NPV3 = 25%×45 + 50%×20 + 25%×(−30) ≈ 14. Example 2 (below) illustrates the process of obtaining the expected NPV.

Example 2: illustrates how to calculate the expected NPV (ENPV) when 3 NPV outcomes have been identified resulting from 3 different customer uptake scenarios. The expected NPV calculation assumes that we do not have any flexibility to avoid any of the 3 outcomes. The circular node represents a chance node yielding the expected outcome given the weighted NPVs.

 

In general the expected NPV can be written as

ENPV = Σ(i=1..N) Pi × NPVi,

where N is the number of possible NPV outcomes, NPVi is the net present value of the ith outcome and Pi is the chance that the ith outcome will occur. By including scenarios in the valuation analysis, the uncertainty of the real world is being captured. The risk of overestimating or underestimating a project valuation is thereby reduced. Typically, the estimation of P, the chance or probability of a particular outcome, is based on the subjective “feeling” of the Business Analyst, who obviously still needs to build a credible story around his choice of likelihood for the scenarios in question. Clearly this is not a very satisfactory situation, as all kinds of heuristic biases are likely to influence the choice of a given scenario’s likelihood. Still, it is clearly more realistic than a purely deterministic approach with only one locked-in outcome.

 

Example 3 shows various market outcomes used to study the uncertainty of market conditions upon the net-present value of Example 1 and the project valuation subject to these uncertainties. The curve represented by the thick solid line and open squares is the base market scenario used in Example 1, while the other curves represent variations to the base case. Various uncertainties of the customer growth have been explored. An s-curve (logistic function) approach has been used to model the customer uptake for the studied service, where t is the time period, Smax is the maximum expected number of customers, b determines the slope in the growth phase, and (1/a) is the years to reach the mid-point of the S-curve. A second function models the possible decline in the customer base, with c being the rate of decline in the market share, and td the period when the decline sets in. Smax has been varied between 2.5 and 6.25 Million customers, with an average of 5.0 Million; b was chosen to be 50 (arbitrarily); (1/a) was varied between 1/3 and 2 years, with a mean of 0.5 years. In modeling the market decline, the rate of decline c was varied between 0% and 25% per year, with a chosen mean value of 10%, and td was varied between 0 and 3 years, with a mean of 2 years before the market decline starts. In all cases a so-called pert distribution was used to model the parameter variation. Instead of running a limited number of scenarios as shown in Example 2 (3 outcomes), a Monte Carlo (MC) simulation is carried out sampling several thousands of possible outcomes.

 

As already discussed a valuation analysis often involves many uncertain variables and assumptions. In the above Example 3 different NPV scenarios had been identified, which resulted from studying the customer uptake. Typically, the identified uncertain input variables in a simplified scenario-sensitivity approach would each have at least three possible values; minimum (x), base-line or most-likely (y), and maximum (z). For every uncertain input variable the Analyst has identified a variation, i.e., 3 possible variations. For an analysis with 2 uncertain input variables, each with variation, it is not difficult to show that the outcome is 9 different scenario-combinations, for 3 uncertain input variables the result is 72 scenario-combinations, 4 uncertain input variables results in 479 different scenario permutations, and so forth. In complex models containing 10 or more uncertain input variables, the number of combinations would have exceeded 30 Million permutations [9]. Clearly, if 1 or 2 uncertain input variables have been identified in a model the above presented scenario-sensitivity approach is practical. However, the range of possibilities quickly becomes very large and the simple analysis breaks down. In these situations the Business Analyst should turn to Monte Carlo [10] simulations, where a great number of outcomes and combinations can be sampled in a probabilistic manner and enables proper statistical analysis. Before the Analyst can perform an actual Monte Carlo simulation, a probability density function (pdf) needs to be assigned to each identified uncertain input variable and any correlation between model variables needs to be addressed. It should be emphasized that with the help of subject-matter experts, an experienced Analyst in most cases can identify the proper pdf to use for each uncertain input variable. A tool such as Palisade Corporation’s @RISK toolbox [2] for MS Excel visualizes, supports and greatly simplifies the process of including uncertainty into a deterministic model, and efficiently performs Monte Carlo simulations in Microsoft Excel.
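The workflow is easy to sketch without a dedicated toolbox. Below is a minimal Python stand-in for Examples 3 and 4: pert-distributed uptake parameters feed a deliberately simplified version of the Example 1 cash-flow model, and the NPV distribution is sampled by Monte Carlo. Both the cash-flow model and the exact s-curve parametrization are assumptions of the sketch, so the statistics will not match the quoted figures exactly; the mechanics, however, are the same.

```python
# Monte Carlo sketch: pert-distributed s-curve parameters -> NPV distribution.
# The per-user cost of 4.0 euro/year and the logistic parametrization are assumptions of
# this sketch (not taken from the paper), so mean / std-dev / P(loss) are only indicative.
import math, random, statistics

def pert(minimum, mode, maximum, lam=4.0):
    """Sample a (modified) PERT-distributed value via the underlying beta distribution."""
    a = 1 + lam * (mode - minimum) / (maximum - minimum)
    b = 1 + lam * (maximum - mode) / (maximum - minimum)
    return minimum + (maximum - minimum) * random.betavariate(a, b)

def npv_trial(r_market=0.20, r_private=0.05):
    s_max = pert(2.5e6, 5.0e6, 6.25e6)       # saturation level of the customer base
    midpoint = pert(1 / 3, 0.5, 2.0)         # years to reach the s-curve mid-point
    npv = -20e6                              # initial platform investment (year 0)
    for t in range(1, 6):
        users = s_max / (1 + math.exp(-2.5 * (t - midpoint)))
        revenue = users * 0.80 * 12 * 0.90 ** (t - 1)             # ARPU eroding 10%/year
        market_cost = users * 4.0 + 400e3 * 1.10 ** (t - 1)       # assumed blended cost per user
        network_cost = 0.02 * 20e6 + 0.20 * 20e6 + 150e3 * 1.10 ** (t - 1)
        npv += ((revenue - market_cost) / (1 + r_market) ** t
                - network_cost / (1 + r_private) ** t)
    return npv

random.seed(2006)
samples = [npv_trial() for _ in range(10_000)]
mean, std = statistics.mean(samples), statistics.stdev(samples)
p_loss = sum(s < 0 for s in samples) / len(samples)
print(f"mean NPV {mean / 1e6:5.1f}M, std-dev {std / 1e6:4.1f}M, P(NPV < 0) {p_loss:.0%}")
```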

 

Rather than guessing a given scenario’s likelihood, it is preferable to transform the deterministic scenarios into one probabilistic scenario: substituting important scalars (or drivers) with best-practice probability distributions and introducing logical switches that mimic choices or options inherent in different driver outcomes. Statistical sampling across simulated outcomes will provide an effective (or blended) real option value.

 

In Example 1 a standard deterministic valuation analysis was performed for a new service and the corresponding network investments. The inherent assumption was that all future cash-flows as well as cost-structures were known. The analysis yielded a 5-year NPV of 26 mio (using the market-private discount rates). This can be regarded as a pure deterministic outcome. The Business Analyst is requested by Management to study the impact on the project valuation of incorporating uncertainties into the business model. Thus, the deterministic business model should be translated into a probabilistic model. It is quickly identified that the market assumption, the customer intake, is an area which needs more analysis. Example 3 shows various possible market outcomes. The reference market model is represented by the thick solid line and open squares. The market outcome is linked to the business model (cash-flows, cost and net-present value). The deterministic model in Example 1 has now been transformed into a probabilistic model including market uncertainty.

Example 4: shows the impact of uncertainty in the marketing forecast of customer growth on the Net Present Value (extending Example 1). A Monte Carlo (MC) simulation was carried out subject to the variations of the market conditions (framed box with MC in right side) described above (Example 3) and the NPV results were sampled. As can be seen in the figure above, an expected mean NPV of 22M was found with a standard deviation of 16M. Further analysis reveals a 10% probability of loss (i.e., NPV ≤ 0 euro) and an opportunity of up to 46M. Charts below (Example 4b and 4c) show the NPV probability density function and integral (probability), respectively.

Example 4b (NPV probability density function) and Example 4c (cumulative probability)

Example 4 above summarizes the result of carrying out a Monte Carlo (MC) simulation, using @RISK [2], determining the risks and opportunities of the proposed service and therefore obtaining a better foundation for decision making. In the previous examples the net-present value was represented as a single number; €26M in Example 1 and an expected NPV of €14M in Example 2. In Example 4, the NPV is far richer (see the probability charts of NPV at the bottom of the page) – first note that the mean NPV of €22M agrees well with Example 1. Moreover, the Monte Carlo analysis shows the project down-side: there is a 10% chance of ending up with a poor investment, resulting in value destruction. The opportunity or upside is a chance (i.e., 5%) of gaining more than €46M within a 5-year time-horizon. The project risk profile is represented by the NPV standard deviation, i.e. the project volatility, of €16M. It is Management’s responsibility to weigh the risk, downside as well as upside, and ensure that proper mitigation will be considered to reduce the impact of the project downside and potential value destruction.

 

The valuation methodologies presented so far do not consider flexibility in decision making. Once an investment decision has been taken, investment management is assumed to be passive. Thus, should a project turn out to destroy value, which is inevitable if revenue growth becomes limited compared to the operating cost, Management is assumed not to terminate or abandon this project. In reality, active Investment Management and Management Decision Making do consider options and their economic and strategic value. In the following, a detailed discussion of the valuation of options and the impact on decision making is presented. Real options analysis (ROA) will be introduced as a natural extension of probabilistic cash flow and net present value analysis. It should be emphasized that ROA is based on some advanced mathematical as well as statistical concepts, which will not be addressed in this work.

However, it is possible to get started on ROA with a proper re-arrangement of the conventional valuation analysis, as well as by incorporating uncertainty wherever appropriate. In the following, the goal is to introduce the reader to thinking about the value of options.

 

REAL OPTIONS & VALUATION

An investment option can be seen as decision flexibility which, depending upon uncertain conditions, might be realized. It should be emphasized that, as with a financial option, it is at the investor’s discretion to realize an option. Any cost or investment for the option itself can be viewed as the premium a company has to pay in order to obtain the option. For example, a company could be looking at an initial technology investment, with the option later on to expand should market conditions be favorable for value growth. Exercising the option, or making the decision to expand the capacity, results in a commitment of additional cost and capital investments – the “strike price” – into realizing the plan/option. Once the option to expand has been exercised, the expected revenue stream becomes the additional value subject to private and market risks. In every technology decision a decision-maker is faced with various options and would need to consider the ever-prevalent uncertainty and risk of real-world decisions.

 

In the following example, a multinational company is valuing a new service with the idea to commercially launch in all its operations. The cash-flows, associated with the service, are regarded as highly uncertain, and involve significant upfront development cost and investments in infrastructure to support the service. The company studying the service is faced with several options for the initial investment as well as future development of the service. Firstly, the company needs to make the decision to launch the service in all countries in which it is based, or to start-up in one or a few countries to test the service idea before committing to a full international deployment, investing in transport and service capacity. The company also needs to evaluate the architectural options in terms of platform centralization versus de-centralization, platform supplier harmonization or commit to a more-than-one-supplier strategy. In the following, options will be discussed in relation to the service deployment as well as the platform deployment, which supports the new service. In the first instance the Marketing strategy defines a base-line scenario in which the service is launched in all its operations at the same time. The base-line architectural choice is represented by a centralized platform scenario placed in one country, providing the service and initial capacity to the whole group.


Platform centralization provides for efficient investment and resourcing; instead of several national platform implementation projects, only one country focuses its resources. However, the operating costs might be higher due to the need for international leased transmission connectivity to the centralized platform. Due to the uncertainty in the assumed cash-flows, arising from market uncertainties, the following strategy has been identified: the service will be launched initially in a limited number of operations (one or two) with the option to expand should the service be successful (option 1), or, should the service fail to generate revenue and growth potential, an option to abandon the service after 2 years (option 2). The valuation of the identified options should be assessed in comparison with the base-line scenario of launching the service in all operations. It is clear that the expansion option (option 1) leads to a range of options in terms of platform expansion strategies depending on the traffic volume and the cost of the leased international transmission (carrying the traffic) to the centralized platform.

 

For example, if the cost of transmission exceeds the cost of operating the service platform locally, an option to locally deploy the service platform is created. From this example it can be seen that by breaking up the investment decisions into strategic options the company has ensured that it can abandon the service should it fail to generate the expected revenue or cash-flows, reducing losses and destruction of wealth. However, more importantly the company, while protecting itself from the downside, has left open the option to expand at the cost of the initial investment. It is evident that as the new service is launched and cash-flows start being generated (or fail to materialize) the company gains more certainty and better grounds for making decisions on which strategic options should be exercised.

 

In the previous example, an investment and its associated valuation could be related to the choices which come naturally out of the collection of uncertainties and the resulting risk. In the literature (e.g., [11], [12]) it has been shown that conventional cash-flow analysis, which omits option valuation, tends to under-estimate the project value [13]. The additional project value results from identifying inherent options and valuing these options separately as strategic choices that can be made in a given time-horizon relevant to the project. The consideration of the value of options in the physical world closely relates to financial options theory and treatment of financial securities [14]. The financial options analysis relates to the valuation of derivatives [15] depending on financial assets, whereas the analysis described above identifying options related to physical or real assets, such as investment in tangible projects, is defined as real options analysis (ROA). Real options analysis is a fairly new development in project valuation (see [16], [17], [18], [19], [20], and [21]), and has been adopted to gain a better understanding of the value of flexibility of choice.

 

One of the most important ideas about options in general, and real options in particular, is that uncertainty widens the range of potential outcomes. By proper mitigation and contingency strategy the downside of uncertainty can be significantly reduced, leaving the upside potential. Uncertainty, often feared by Management, can be very valuable, provided the right level of mitigation is exercised. In our industry most committed investments involve a high degree of uncertainty, in particular concerning market forces and revenue expectations, but technology-related uncertainty and risk are not negligible either. The value of an option, or strategic choice, arises from the uncertainty and related risk that real-world projects will be facing during their life-time. The uncertain world, as well as project complexity, results in a portfolio of options, or choice-paths, a company can choose from. It has been shown that such options can add significant value to a project – however, presently options are often ignored or valued incorrectly [11][21]. In projects which are inherently uncertain, the Analyst would look for project-valuable options such as, for example:

  1. Defer/Delay – wait and see strategy (call option)
  2. Future growth/ Expand/Extend – resource and capacity expansion (call option)
  3. Replacement – technology obsolescence/end-of-life issues (call option)
  4. Introduction of new technology, service and/or product (call option)
  5. Contraction – capacity decommissioning (put option)
  6. Terminate/abandon – poor cash-flow contribution or market obsolescence (put option)
  7. Switching options – dynamic/real-time decision flexibility (call/put option)
  8. Compound options – phased and sequential investment (call/put option)

It is instructive to consider a number of examples of options/flexibilities which are representative for the mobile telecommunications industry. Real options, or options on physical assets, can be divided into two basic types – calls and puts. A call option gives the holder of the option the right to buy an asset, and a put option provides the holder with the right to sell the underlying asset.

 

First, the call option will be illustrated with a few examples. One of the most important options open to management is the option to Defer or Delay (1) a project. This is a call option, a right to buy, on the value of the project. The defer/delay option will be addressed at length later in this paper. The choice to Expand (2) is an option to invest in additional capacity and increase the offered output if conditions are favorable. This is defined as a call option, i.e., the right to buy or invest, on the value of the additional capacity that could enable extra customers, minutes-of-use, and of course additional revenue. The exercise price of the call option is the investment and additional cost of providing the additional capacity, discounted to the time of the option exercise. A good example is the expansion of a mobile switching infrastructure to accommodate an increase in the customer base. Another example of expansion could be moving from platform centralization to de-centralization as traffic grows and the cost of centralization becomes higher than the cost of decentralizing a platform. For example, the cost of transporting traffic to a centralized platform location could, depending on cost-structure and traffic volume, become un-economical.

Moreover, Management is often faced with the option to extend the life of an asset by re-investing in its renewal – this choice is a so-called Replacement Option (3). This is a call option, the right to re-invest, on the asset’s future value. An example could be the renewal of the GSM base-transceiver stations (BTS), which would extend their life and add additional revenue streams in the form of options to offer new services and products not possible on the older equipment. Furthermore, there might be additional value in reducing the operational cost of old equipment, which typically has higher running costs than new equipment.

Terminate/Abandonment (6) in a project is an option to either sell or terminate a project. It is a so-called put option, i.e., it gives the holder the right to sell, on the project’s value. The strike price would be the termination value of the project reduced by any closing-down costs. This option mitigates the impact of a poor investment outcome and increases the valuation of the project. A concrete example could be the option to terminate poorly revenue-generating services or products, or abandon a technology where the operating costs result in value destruction – the growth in cash-flows cannot compensate for the operating costs. Contraction choices (5) are options to reduce the scale of a project’s operation. This is a put option, a right to “sell”, on the value of the lost capacity. The exercise price is the present value of future cost and investments saved, as seen at the time of exercising the option.

In reality most real investment projects can be broken up into several phases and will therefore also consist of several options, and the proper investment and decision strategy will depend on the combination of these options. Phased or sequential investment strategies often include Compound Options (8), which are a series of options arising sequentially.

 

The radio access network site-rollout investment strategy is a good example of how compound options analysis could be applied. The site rollout process can be broken into (at least) 4 phases: 1. Site identification, 2. Site acquisition, 3. Site preparation (site build/civil work), and finally 4. Equipment installation, commissioning and network integration. Phase 2 depends on phase 1, phase 3 depends on phase 2, and phase 4 depends on phase 3 – a sequence of investment decisions each depending on the previous decision; thus the anatomy of the real options is that of Compound Options (8). Assuming that a given site location has been identified and acquired (a call option on the site lease), which is typically the time-consuming and difficult part of the overall rollout process, the option to prepare the site emerges (Phase 3). This option, also a call option, could depend on the market expectations and the competition’s strategy, local regulations and site-lease contract clauses. The flexibility arises from deferring/delaying the decision to commit investment to site preparation. The decision or option time-horizon for this deferral/delay option is typically set by the lease contract and its conditions. If the option expires the lease costs have been lost, but the value arises from not investing in a project that would result in negative cash-flow. As market conditions for the rollout technology become more certain, with higher confidence in revenue prospects, a decision to move to site preparation (Phase 3) can be made.

In terms of investment management, after Phase 3 has been completed there is little reason not to pursue Phase 4 and install and integrate the equipment enabling service coverage around the site location. If at the point of Phase 3 the technology or supplier choice still remains uncertain, it might be a valuable option to await (a deferral/delay option) a decision on the supplier and/or technology to be deployed.

In the site-rollout example described, other options can be identified, such as an abandon/terminate option on the lease contract (i.e., a put option). After Phase 4 has been completed there might come a day where an option to replace the existing equipment with new and more efficient/economical equipment arises. It might even be interesting to consider the option value of terminating the site altogether and de-installing the equipment. This could happen when operating costs exceed the cash-flow. It should be noted that the termination option is quite dramatic with respect to site-rollout, as this decision would disrupt network coverage and could antagonize existing customers. However, the option to replace the older technology, and maybe un-economical services, with a new and more economical technology-service option might prove valuable. Most options are driven by various sources of uncertainty. In the site-rollout example, uncertainty might be found with respect to site-lease cost, time-to-secure-site, inflation (impacting the site-build cost), competition, site supply and demand, market uncertainties, and so forth.

 

Going back to Example 1 and Example 4, the platform subject-matter expert (often different from the Analyst) has identified that if the customer base exceeds 4 Million customers, an expansion of €10M will be needed. Thus, the previous examples underestimate the potential investments in platform expansion due to customer growth. Given that the base-line market scenario does identify that this would be the case in the 2nd year of the project, the €10M is included in the deterministic conventional business case for the new service. The result of including the €10M in the 2nd year of Example 1 is that the NPV drops from €26M to €8.7M (∆NPV of minus €17.6M). Obviously, the conventional Analyst would stop here and still be satisfied that this seems to be a good and solid business case. The approach of Example 4 is then applied to the new situation, subject to the same market uncertainty given in Example 3. From the Monte Carlo simulation it is found that the NPV mean-value is only €4.7M. However, the downside is that the probability of loss (i.e., an NPV less than 0) is now 38%. It is important to realize that in both examples the assumption is that there is no choice or flexibility concerning the €10M investment; the investment will be committed in year two. However, the project has an option – the option to expand provided that the customer base exceeds 4 Million customers. Time-wise it is a flexible option in the sense that, with an expected project lifetime of 5 years, at any time within this time-horizon the customer base might exceed the critical mass for platform expansion.

Example 5: Shows the NPV valuation outcome when an option to expand is included in the model of Example 4. The €10M  is added if and only if the customer base exceeds 4 Million.

In Example 5 above the probabilistic model has been changed to add €10M if and only if the customer base exceeds 4 Million. Basically, the option of expansion is being simulated. Treating the expansion as an option is clearly valuable for the business case, as the NPV mean-value has increased from €4.7M to €7.6M. In principle the option value could be taken to be €2.9M. It is worth noticing that the probability of loss has also been reduced (from 38% to 25%) by allowing for the option not to expand the platform if the customer base target is not achieved. It should be noted that although the example does illustrate the idea of options and flexibility, it is not completely in line with a proper real options analysis.
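A sketch of how such an option “switch” looks inside a Monte Carlo loop is shown below. It reuses the same simplified stand-in cash-flow model and assumed figures as the earlier sketches, so only the ordering of the two cases should be expected to carry over, not the exact €3M difference.

```python
# Option-to-expand switch: the 10M expansion is only spent in the first year the simulated
# customer base exceeds 4 Million, whereas the "no option" case commits it in year 2 regardless.
# Uptake model and per-user cost are the same simplified assumptions used in the earlier sketch.
import math, random, statistics

def npv_trial(expand_as_option, r_market=0.20, r_private=0.05):
    s_max = random.triangular(2.5e6, 6.25e6, 5.0e6)   # triangular stand-in for the pert
    midpoint = random.triangular(1 / 3, 2.0, 0.5)
    npv, expanded = -20e6, False
    for t in range(1, 6):
        users = s_max / (1 + math.exp(-2.5 * (t - midpoint)))
        net_market = users * (0.80 * 12 * 0.90 ** (t - 1) - 4.0) - 400e3 * 1.10 ** (t - 1)
        network_cost = 0.02 * 20e6 + 0.20 * 20e6 + 150e3 * 1.10 ** (t - 1)
        if expand_as_option:
            if users > 4e6 and not expanded:          # exercise the expansion option when needed
                network_cost += 10e6
                expanded = True
        elif t == 2:                                  # no flexibility: committed in year 2
            network_cost += 10e6
        npv += (net_market / (1 + r_market) ** t
                - network_cost / (1 + r_private) ** t)
    return npv

random.seed(7)
with_option = [npv_trial(True) for _ in range(10_000)]
no_option = [npv_trial(False) for _ in range(10_000)]
for name, s in (("with expansion option", with_option), ("expansion committed  ", no_option)):
    print(f"{name}: mean NPV {statistics.mean(s) / 1e6:5.1f}M, "
          f"P(loss) {sum(x < 0 for x in s) / len(s):.0%}")
```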

Example 6: Shows the different valuation outcomes depending on whether the €10M platform expansion (when the customer base exceeds 4 Million) is considered as un-avoidable (i.e., the “Deterministic No Option” and “Probabilistic No Option” cases) or as an option or choice to do so (“Probabilistic with Option”). It should be noted that the additional €3M difference between “Probabilistic No Option” and “Probabilistic With Option” can be regarded as an effective option value, but it does not necessarily agree with a proper real-option valuation analysis of the option to expand. Another difference in the two probabilistic models is that in the model with the option to expand an expansion can happen in any year if the customer base exceeds 4 Million, while the No Option model only considers the expansion in year 2 where, according to the marketing forecast, the base exceeds the 4 Million. Note that Example 6 differs in assumptions from Example 1 and Example 4, as these do not include the additional €10M.

 

Example 6 above summarizes the three different approaches to valuation analysis; deterministic (essentially 1-dimensional), probabilistic without options, and probabilistic including the value of options.

The investment analysis of real options as presented in this paper is not a revolution but rather an evolution of the conventional cash-flow and NPV analysis. The approach to valuation is first to understand and properly model the base-line case. After the conventional analysis has been carried out, the analyst, together with subject-matter experts, should determine areas of uncertainty by identifying the most relevant uncertain input parameters and their variation ranges. As described in the previous section, the deterministic business model is thereby transformed into a probabilistic model. The valuation range, or NPV probability distribution, is obtained by Monte Carlo simulations and the opportunity and risk profile is analyzed. The NPV opportunity-risk profile will identify the need for mitigation strategies, which in itself results in studying the various options inherent in the project. The next step in the valuation analysis is to value the identified project or real options. The qualitative importance of considering real options in investment decisions has been provided in this paper. It has been shown that conventional investment analysis, represented by net-present value and discounted cash-flow analysis, gives only one side of the valuation analysis. As uncertainty is the “father” of opportunity and risk it needs to be considered in the valuation process. Are identified options always valuable? The answer to that question is no – if we have certainty that the underlying movement is not in our favor then the option would not be valuable. Think for example of considering a growth option at the onset of a severe recession.

 

The real options analysis is often presented as being difficult and too mathematical, in particular due to the involvement of the partial differential equations (PDEs) that describe the underlying uncertainty (continuous-time stochastic processes, Markov processes, diffusion processes, and so forth). The study of such PDEs is the basis for the ground-breaking work of Black, Scholes and Merton [22][23] on option pricing, which provided the financial community with an analytical expression for valuing financial options. However, “heavy” mathematical analysis is not really needed for getting started on real options.
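For reference, the analytical result itself is compact enough to fit in a few lines. The sketch below evaluates the standard Black-Scholes-Merton closed form for a European call (see Hull [14]); the numbers in the example call are purely illustrative.

```python
# Black-Scholes-Merton closed form for a European call on a non-dividend-paying
# underlying: spot S, strike K, risk-free rate r, volatility sigma, maturity T (years).
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, r, sigma, T):
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# read as a real option: the right to pay K = 10 (EUR M) in two years for an
# asset (project) worth S = 12 today, with 35% volatility (illustrative inputs)
print(round(bs_call(S=12.0, K=10.0, r=0.05, sigma=0.35, T=2.0), 2))
```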

 

Real options are a way of thinking: identifying valuable options in a project or potential investment that could create even more value by being treated as options instead of as deterministic givens.

 

Furthermore, Cox et al. [24] proposed a simplified algebraic approach, which involves so-called binomial trees representing price, cash-flow, or value movements over time. The binomial approach is very easy to understand and implement, resembles standard decision-tree analysis, is visually easy to generate, and is algebraically straightforward to solve.
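A minimal sketch of the binomial idea is given below, assuming illustrative inputs: the project value moves up or down each period, and the option to expand (pay K for the then-current project value) is valued by rolling back through the tree with risk-neutral probabilities. With enough steps the result converges towards the Black-Scholes value of the sketch above.

```python
# Cox-Ross-Rubinstein binomial tree [24] for a European-style expansion option
# (pay K at maturity to capture the project value). Inputs are illustrative.
from math import exp, sqrt

def crr_option_value(V0, K, r, sigma, T, steps):
    dt = T / steps
    u = exp(sigma * sqrt(dt))           # up factor per step
    d = 1.0 / u                         # down factor per step
    p = (exp(r * dt) - d) / (u - d)     # risk-neutral probability of an up move
    disc = exp(-r * dt)
    # option payoffs at maturity: max(project value - expansion cost, 0)
    values = [max(V0 * u ** j * d ** (steps - j) - K, 0.0) for j in range(steps + 1)]
    # roll the tree back to today
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

print(round(crr_option_value(V0=12.0, K=10.0, r=0.05, sigma=0.35, T=2.0, steps=200), 2))
```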

 

SUMMARY

Real options are everywhere where uncertainty governs investment decisions. It should be clear that uncertainty can be turned into a great advantage for value growth, provided proper contingencies are taken to reduce the downside of uncertainty – mitigating risk. Very few investment decisions are static, as conventional discounted cash-flow analysis might otherwise indicate; they are ever changing due to changes in market conditions (global as well as local), technologies, cultural trends, etc. In order to continue to create wealth and value for the company, value growth is needed, and this should force a dynamic investment-management process that continuously looks at the existing as well as future valuable options available to the industry. It is compelling to say that a company’s value should be related to its real-options portfolio, its track record in mitigating risk, and its achievement of the uncertain upside of opportunities.

 

ACKNOWLEDGEMENT

I am indebted to Sam Sugiyama (President & Founder of EC Risk USA & Europe) for taking time out from a very busy schedule and having a detailed look at the content of our paper. His insights and hard questions have greatly enriched this work. Moreover, I would also like to thank Maurice Ketel (Manager Network Economics), Jim Burke (who in 2006 was Head of T-Mobile Technology Office) and Norbert Matthes (who in 2007 was Head of Network Economics T-Mobile Deutschland) for their interest and very valuable comments and suggestions.

___________________________

APPENDIX – MATHEMATICS OF VALUE.

Firstly we note that the Future Value FV (of money) can be defined as the Present Value PV (of money) times a relative increase given by an effective rate r* (i.e., the rate that represents the change of money value between time periods), reflecting value increase (or of course decrease) over the course of time t:

$FV = PV\,(1 + r^*)^t$

So the Present Value, given that we know the Future Value, would be

$PV = \frac{FV}{(1 + r^*)^t}$

For a sequence (or series) of future money flows $y_1, y_2, \dots, y_n$ we can write the present value as

$PV = \sum_{t=1}^{n} \frac{y_t}{(1 + r^*)^t}$

If r* is positive, time-value-of-money follows naturally, i.e., money received in the future is worth less than money today. It is a fundamental assumption that you can create more value with your money today than by waiting to receive it in the future (i.e., not per se right for the majority of human beings, but maybe for Homo Economicus).
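For the practically minded, the three definitions above translate directly into a few lines of code (a minimal sketch; cash_flows[k-1] is the flow received in period k):

```python
# Time-value-of-money basics: future value, present value and the NPV of a
# series of cash flows at an effective per-period rate r_star.
def future_value(pv, r_star, t):
    return pv * (1.0 + r_star) ** t

def present_value(fv, r_star, t):
    return fv / (1.0 + r_star) ** t

def npv(cash_flows, r_star):
    return sum(y / (1.0 + r_star) ** k for k, y in enumerate(cash_flows, start=1))

print(round(npv([10, 10, 10, 10, 10], 0.10), 2))   # five 10-unit flows discounted at 10%
```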

First, the sequence of future money values (discounted to the present) has the structure of a geometric series, with $y_{k+1} = g^*\,y_k$ (i.e., $g^*$ representing the change in y between two periods k and k+1).

Define $a = \frac{y_1}{1+r^*}$ and $q = \frac{g^*}{1+r^*}$; thus in this framework we have that $V_n = a + aq + aq^2 + \dots + aq^{n-1}$ (note: I am doing all kinds of “naughty” simplifications to not get into too much trouble with the math).

The following relation is easy to realize:

$q\,V_n = aq + aq^2 + \dots + aq^{n}$; subtract the two equations from each other and the result is

$V_n\,(1 - q) = a\,(1 - q^n)$, i.e., $V_n = a\,\frac{1-q^n}{1-q}$. In the limit where n goes toward infinity (∞), provided that $|q| < 1$, it can be seen that $V_\infty = \frac{a}{1-q} = \frac{y_1}{(1+r^*) - g^*}$.

It is often forgotten that this is correct if and only if $q = \frac{g^*}{1+r^*} < 1$, or in other words, if the discount rate (to present value) is higher than the growth rate of the future money flows.

You might often hear your finance folks (or M&A jockeys) talk about Terminal Value (they might also call it continuation value or horizon value … for many years I called it Termination Value … though that is of course slightly out of sync with Homo Financius, not to be mistaken for Homo Economicus :-).

$PV = NPV_T + \frac{TV}{(1+r)^T}$, with TV representing the Terminal Value and $NPV_T$ representing the net present value as calculated over a well-defined time span T.

 

I have always found the Terminal Value fascinating, as its size (matters?) or relative magnitude can be very substantial and frequently far greater than the NPV in terms of “value contribution” to the present value. Of course we do assume that our business model will survive to “Kingdom Come”. That appears to be a slightly optimistic assumption (wouldn’t you say, my friends? :-). We also assume that everything in the future is defined by the last year of cash flow, the cash-flow growth rate and our discount rate (hmmm, don’t say that Homo Financius isn’t optimistic). Mathematically this is all okay (provided g < r), economically maybe not so. I have had many and intense debates with past finance colleagues about the validity of Terminal Value. However, to date it remains fairly standard practice to jack up the enterprise value of a business model with a “bit” of Terminal Value.

Using the above (i.e., including our somewhat “naughty” simplifications), the Terminal Value takes the growing-perpetuity form $TV \approx \frac{y_{T+1}}{r-g}$, with $y_{T+1}$ being the first cash flow beyond the explicit forecast horizon T. It is easy to see why TV can be a very substantial contribution to the total value of a business model: the denominator (r − g) tends to be a lot smaller than 1 (note that we always require g < r) and thus “blows up” the TV contribution to the present value (even when g is chosen to be zero).
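A small sketch makes the point tangible (the cash flows, r and g below are invented; the terminal value uses the growing-perpetuity form discussed above):

```python
# Explicit-horizon NPV versus the discounted Terminal Value contribution.
def npv_with_terminal_value(cash_flows, r, g):
    T = len(cash_flows)
    npv_T = sum(y / (1 + r) ** k for k, y in enumerate(cash_flows, start=1))
    tv = cash_flows[-1] * (1 + g) / (r - g)   # value at time T of all flows beyond T
    return npv_T, tv / (1 + r) ** T           # both expressed in present-value terms

explicit, terminal = npv_with_terminal_value([10, 11, 12, 13, 14], r=0.09, g=0.02)
print(f"explicit-horizon NPV: {explicit:.1f}, discounted terminal value: {terminal:.1f}")
```

With these (arbitrary) inputs the discounted terminal value comes out at roughly three times the explicit-horizon NPV, which is exactly the kind of ratio that makes the Terminal Value debate worth having.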

Let’s evaluate the impact of uncertainty in the interest rate x. First re-write the NPV formula as

$V_n = \sum_{k=1}^{n} y_k\,(1+x)^{-k}$, where $y_k$ is the cash flow at time k (for the moment it remains unspecified). From error/uncertainty propagation it is known that the standard deviation of a multi-variate function $z = f(x, y, \dots)$ can be written as $\Delta z = \sqrt{\left(\tfrac{\partial f}{\partial x}\right)^2 \Delta x^2 + \left(\tfrac{\partial f}{\partial y}\right)^2 \Delta y^2 + \dots}$. Identifying the terms in the NPV formula is easy: $z = V_n$, x is the interest rate and the $y_k$ are the cash flows.

In a first approximation assume that x is the uncertain parameter, while the $y_k$ are certain (i.e., $\Delta y_k = 0$); then the following holds for the NPV standard deviation:

$\Delta V_n = \left|\tfrac{\partial V_n}{\partial x}\right| \Delta x = \sum_{k=1}^{n} k\,y_k\,(1+x)^{-(k+1)}\,\Delta x$.

In the special case where $y_k = y$ is constant for all k, a similar analysis as above (summing the resulting series) gives a closed form, with $\sum_{k=1}^{n} k\,(1+x)^{-(k+1)} \rightarrow \tfrac{1}{x^2}$ in the limit where n goes toward infinity (applying l’Hospital’s rule to the n-dependent term). Hence the following holds for propagating uncertainty/errors in the NPV formula: $\Delta NPV_\infty = \tfrac{y\,\Delta x}{x^2}$, or in relative terms $\tfrac{\Delta NPV_\infty}{NPV_\infty} = \tfrac{\Delta x}{x}$.

Let’s take a numerical example: y = 1, the interest rate x = 10% and the uncertainty/error is assumed to be no more than ∆x = 3% (7% ≤ x ≤ 13%); assume that n → ∞ (infinite time horizon). Using the formulas derived above, $NPV_\infty = y/x = 10$ (11 if a year-zero cash flow is included) and $\Delta NPV_\infty \approx \pm 3.3$ (evaluating the NPV directly at the rate extremes x = 7% and x = 13%), or roughly a 30% error on the estimated NPV. If the assumed cash flows (i.e., the $y_k$) are also uncertain, the error will be even greater than 30%. The above analysis becomes more complex when $y_k$ is non-constant over time k, and the $y_k$ should in general be regarded as uncertain too. The use of, for example, Microsoft Excel becomes rather useful to gain further insight (although the math is pretty fun too).
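A quick numerical cross-check of the propagation result (constant y, the infinite horizon approximated by a long finite sum, and the uncertainty read off by evaluating the NPV at the two rate extremes):

```python
# NPV sensitivity to the discount rate: y = 1, x = 10%, uncertainty +/- 3%.
def npv_const(y, x, n):
    return sum(y / (1 + x) ** k for k in range(1, n + 1))

y, x, dx, n = 1.0, 0.10, 0.03, 2000
base = npv_const(y, x, n)
half_range = (npv_const(y, x - dx, n) - npv_const(y, x + dx, n)) / 2
print(f"NPV = {base:.2f}, +/- {half_range:.2f}  (~{half_range / base:.0%} relative error)")
```

This reproduces the ±3.3 half-range and the roughly 30% relative error quoted above.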


[1] This is likely due to the widespread use of MS Excel and financial pocket calculators allowing for easy NPV calculations, without the necessity for the user to understand the underlying mathematics, treating the formula as a “black box” calculation. Note that a common mistake when using the MS Excel NPV function is to include the initial investment (t=0) in the formula – this is wrong, as the NPV formula starts with t=1. The initial investment would then be discounted, which would lead to an overestimation of value.

[2] http://www.palisade-europe.com/. For purchases contact Palisade Sales & Training, The Blue House 30, Calvin Street, London E1 6NW, United Kingdom, Tel. +442074269955, Fax +442073751229.

[3] Sugiyama, S.O., “Risk Assessment Training using The Decision Tools Suite 4.5 – A step-by-step Approach” and “Introduction to Advanced Applications for Decision Tools Suite – Training Notebook – A step-by-step Approach”, Palisade Corporation. The Training Course as well as the training material itself can be highly recommended.

[4] Most people are in general not schooled in probability theory, statistics and mathematical analysis. Great care should be taken to present matters in an intuitive rather than mathematical fashion.

[5] Hill, A., “Corporate Finance”, Financial Times Pitman Publishing, London, 1998.

[6] This result comes straight from geometric-series calculus. Remember that a geometric series is defined as $S_n = \sum_k a\,q^k$, where q is constant. For the NPV geometric series it can easily be shown that $q = \frac{1}{1+r}$, r being the interest rate. A very important property is that the series converges if $|q| < 1$, which is the case for the NPV formula when the interest rate r > 0. The convergent series sums to the finite value $\frac{a\,q}{1-q}$ for k starting at 1 and summed up to ∞ (infinity).

[7] Benninga, S., “Financial Modeling”, The MIT Press, Cambridge Massachusetts (2000), pp.27 – 52. Chapter 2 describes procedures for calculating cost of capital. This book is the true practitioners guide to financial modeling in MS Excel.

[8] Vose, D., “Risk Analysis A Quantitative Guide”, (2nd edition), Wiley, New York, 2000. A very competent book on risk modeling with a lot of examples and insight into competent/correct use of probability distribution functions.

[9] The number of scenario combinations is calculated as follows: an uncertain input variable can be characterized by a possibility set $\{x_1, x_2, \dots, x_s\}$ of length s; in the case of k uncertain input variables (each with s possibilities) the number of scenario combinations is $s^k$, which can readily be computed in Microsoft Excel (related combinatorial counts can be obtained with the COMBIN function).

[10] A Monte Carlo simulation refers to the traditional method of sampling random (stochastic) variables in modeling. Samples are chosen completely randomly across the range of the distribution. For highly skewed or long-tailed distributions a large number of samples is needed for convergence. The @Risk product from Palisade Corporation (see http://www.palisade.com) supplies the perfect tool-box (Excel add-in) for converting a deterministic business model (or any other model) into a probabilistic one.

[11] Luehrman, T.A., “Investment Opportunities as Real Options: Getting Started with the Numbers”, Harvard Business Review, (July – August 1998), p.p. 3-15.

[12] Luehrman, T.A., “Strategy as a Portfolio of Real Options”, Harvard Business Review, (September-October 1998), p.p. 89-99.

[13] Provided that the business assumptions were not inflated to make the case positive in the first place.

[14] Hull, J.C., “Options, Futures, and Other Derivatives”, 5th Edition, Prentice Hall, New Jersey, 2003. This is a wonderful book, which provides the basic and advanced material for understanding options.

[15] A derivative is a financial instrument whose price depends on, or is derived from, the price of another asset.

[16] Boer, F.P., “The Valuation of Technology Business and Financial Issues in R&D”, Wiley, New York, 1999.

[17]  Amram, M., and Kulatilaka, N., “Real Options Managing Strategic Investment in an Uncertain World”, Harvard Business School Press, Boston, 1999. Non-mathematical, provides a lot of good insight into real options and qualitative analysis.

[18] Copeland, T., and V. Antikarov, “Real Options: A Practitioners Guide”, Texere, New York, 2001. While the book provides a lot of insight into the area of practical implementation of Real Options, great care should be taken with the examples in this book. Most of the examples are full of numerical mistakes. Working out the examples and correcting the mistakes provides a great means of obtaining practical experience.

[19] Munn, J.C., “Real Options Analysis”, Wiley, New York, 2002.

[20] Amram. M., “Value Sweep Mapping Corporate Growth Opportunities”, Harvard Business School Press, Boston, 2002.

[21] Boer, F.P., “The Real Options Solution Finding Total Value in a High-Risk World”, Wiley, New York, 2002.

[22] Black, F., and Scholes, M., “The Pricing of Options and Corporate Liabilities”, Journal of Political Economy, 81 (May/June 1973), pp. 637-659.

[23] Merton, R.C., “Theory of Rational Option Pricing”, Bell Journal of Economics and Management Science, 4 (Spring 1973), pp. 141-183.

[24] Cox, J.C., Ross, S.A., and Rubinstein, M., “Option Pricing: A Simplified Approach”, Journal of Financial Economics, 7 (October 1979), pp. 229-263.

GSM – Gone So Much … or is it?

  • A Billion GSM subscriptions & almost $200 Billion in GSM revenue will be gone within the next 5 years.
  • GSM earns a lot less than its “fair” share of the top line, a trend that will worsen further going forward.
  • GSM revenues are fading out rapidly across a majority of the mobile markets across the Globe.
  • Accelerated GSM phase-out happens when the pricing level of the next technology option relative to the GDP per capita drops below 2%.
  • 220 MHz of great spectrum is tied up in GSM, just waiting to be liberated.
  • GSM is horrifically spectrally inefficient in comparison to today’s cellular standards.
  • Eventually we will have 1 GSM network across a given market, shared by all operators, supporting fringe legacy devices (e.g., M2M) while allowing operators to re-purpose remaining legacy GSM spectrum.
  • The single Shared-GSM network might survive past any economical justification for its existence, merely serving legal and political interests.

Gone So Much … GSM is ancient, uncool and so 90s … why would anybody bother with that stuff any longer … it’s synonymous with the Nokia handset (which btw is also ancient, uncool and so 90s … and almost no longer among us thanks to our friend Elop …). In many emerging markets GSM-only phones are hardly demanded or sold any longer in the grey markets – grey markets that make up 90% (or more) of handset sales in many of those emerging markets. Moreover, it’s not only AT&T in the US talking about 2G phase-out; an emerging market such as Thailand is also believed to be void of GSM within the next couple of years.

A bit of Personal History. Some years ago I had the privilege to work with some very smart people in the Telecom Industry on merging two very big mobile operations (ca. 140 million in combined customer base). One of our cardinal spectrum-strategic and technology arguments was the gain in spectral efficiency such a merger would bring. Anecdotally it is worth mentioning that the technology synergies and spectrum-strategic ideas alone would largely have financed the deal in sheer synergies.

In discussions with the country’s regulator we were asked why we could not “just” switch off GSM? Then use that freed GSM spectrum for new cellular technologies, such as UMTS and even LTE, thereby gaining sufficient spectral efficiency that merging the two businesses would become unnecessary. The proposal would effectively have pressed the off button on a service that served ca. 70 Million GSM-only (incl. EDGE & GPRS) subscribers (at the time) across the country. Now that would have been expensive and most likely would have caused a couple of thousand class-action suits to boot.

Here is how one could have thought about the process of clearing out GSM for something better (though overall it is more “for richer, for poorer”). There is no “just … press the off button”, as Sprint also experienced with their iDEN migration.

Our thoughts (and submitted Declarations) were that by merging the two operators’ spectrum (and site pools) we could create sufficient spectral capacity to support both GSM (which we all granted was phasing out) and provide more capacity and a better customer experience for the Now Generation Technology (i.e., HSPA+, or 4G as they like to call it in that particular market … Heretics! ;-). A recent must-read GigaOM blog by Kevin Fitchard, “AT&T begins cannibalizing 2G and 3G networks to boost LTE capacity”, describes very well the aggressive no-nonsense thinking of US carriers (or simply desperation, or both) when it comes to the quest for spectrum efficiency and enhanced customer experience (which coincidentally also yields the best ARPUs).

It is worth mentioning that more than 2×110 Mega Hertz is tied up in GSM: up to 2×35 MHz at 900MHz (if E-GSM has been invoked) and 2×75 MHz at 1800MHz (yes! I am ignoring US GSM band plans; they are messed up but pretty fun nevertheless … a different story for another time). Being able to re-purpose this amount of spectrum to more spectrally efficient cellular technologies (e.g., UMTS Voice, HSPA+ and LTE) would clearly leapfrog mobile broadband, increase voice capacity at increased quality, and serve the current billions of GSM-only users as well as the next billion un-connected or under-served customer segments with The Internet. The macro-economical benefits would be very substantial.

220 MHz of great spectrum is tied up in GSM, just waiting to be liberated.

Back in the days of 2003 I did my first detailed GSM phase-out techno-economical analysis (a bit premature, one might add). I was very interested in questions such as “when can we switch off GSM?”, “what are the economical premises of exiting GSM?”, “why do operators today still continue to encourage subscriber growth on their GSM networks?”, “today … if you got your hands on GSM-usable spectrum, would you start a GSM operation?”, “why?” and “why not?”, etc.

So why don’t we “just” switch off GSM and let go of that old, inefficient cellular technology?

How inefficient, you may ask? … Depending a little on what state the GSM network is in, we can have ca. 3 times more voice users in WCDMA (i.e., UMTS) compared to GSM with Adaptive Multi-Rate (AMR) codec support. Newer technology releases support even more dramatic leaps in voice-handling capabilities.

Data? What about cellular data? That GSM, including its data-handling enhancements GPRS and EDGE, is light-bits away from the data-handling capabilities of WCDMA, HSPA+, LTE and so forth is at this point a well-established fact.

Clearly GSM is horrifically spectrally inefficient in comparison to later cellular standards such as WCDMA, HSPA(+) and LTE(+), and its only light (in a very dark tunnel) is that it is supported at lower frequencies (i.e., more economical deployment in rural areas and for large-surface-area countries). Though today that is no longer unique, as UMTS and LTE are available in similar or even lower frequency ranges. … Of course there are other economical issues at play as well, as we will see below.

Why do we still bother with a 27+ year old technology? A technology that has very poor spectral efficiency in comparison with later cellular technologies. GSM after all “only” provides Voice, SMS and pretty low-bandwidth mobile data (which, while better than nothing, is still very close to nothing).

Well, for one thing there is of course the money thing (and we know that that makes the world go around): ca. 4+ Billion GSM subscriptions worldwide (incl. GPRS & EDGE) generating a total GSM turnover of 280+ Billion US$.

In 2017 we anticipate a little less than 3 Billion GSM subscriptions generating ca. 100+ Billion US$. So … a Billion GSM subscriptions and almost 200 Billion US$ of GSM revenue will have disappeared within the next 5 years (and for the sake of mobile operators hopefully have been replaced by something better).

In this trend APAC takes the lion’s share of the GSM subscription loss with ca. 65% (ca. 800 Million) of the total loss and ca. 50% of the GSM top-line loss (ca. 100 Billion US$).

The share of GSM revenue is rapidly declining across (almost) all markets;

The GSM revenue as a share of the total revenue (as well as in absolute terms) rapidly diminishes as 3G and LTE are introduced and customers migrate to those more modern technologies.

If there should be any doubt, GSM does not get its fair share of revenue compared to its share of the subscriptions (or subscribers for that matter):

While the above data does contain two main clusters, it still illustrates pretty well (what should be no real surprise to anyone) that GSM earns back a lot less than its “fair” share (whatever that really means). And again, if anyone would be in doubt, that picture will only get grimmer as we fast-forward to the near future;

Grim, Grimmer, Grimmest!

Today GSM earns a lot less than its “fair” share of the top line, and this trend will worsen further going forward.

So we can soon phase-out GSM? Right? hmmmm! Maybe not so fast!

Well, while GSM revenue has certainly declined and is expected to continue declining, in many markets the GSM-only customer base (here defined as customers that only have GSM Voice, GPRS and/or EDGE available) has not declined in the proportion that the related revenue decline might fool us into believing.

The statistics above illustrate the GSM-only subscription share of the total cellular business.

There is more to GSM than market and revenue share … and we do need to have a look at the actual decline of GSM subscriptions (or unique users, which is not per se the same) and revenue;

GSM revenues are expected to go into a massive free fall over the next 5 years!

However, also observe (in the chart above) that we need to sustain the network and its associated cost, as a considerable number of customers remains on the network despite generating a lot less top line.

As we have already seen above, in the next 5 years there will be many markets where the GSM subscription and subscriber share will remain reasonably strong, albeit the technology’s ability to turn over revenue will be in free fall in most markets.

Analyzing data from Pyramid Research (actuals & projections for the period 2013 to 2017), including other analyst data sets (particularly on actual data), and extrapolating the data beyond 2017 with diffusion models approximating the dynamics of technology migration in the various markets, we can get an idea of the remaining (residual) life of GSM. In other words, we can make GSM phase-out projections as well as get a feel for the terminal revenue (or residual value) left in GSM, and further get an appreciation of how that terminal value compares to the total mobile turnover over the same GSM phase-out period.

The chart below provides the results of such a comprehensive analysis. The colored bars illustrate the various years of onset of GSM phase-out: (a) the earliest year, equal to the lower end of the light-blue bar, is typically the year where migration off GSM accelerates, (b) the upper end of the light-blue bar is the most likely year after which GSM would no longer be profitable, and (c) the upper end of the red bar illustrates the maximum expected life of GSM. It should be noted that the GSM phase-out chart below might not be shown in its entirety (in particular the right side of the chart). Clicking on the chart itself will display it in full.

Taking the above GSM phase-out years, we can get a feeling for how many useful years GSM has left in terms of economical life and customer lifetime, defined by whichever event comes first: (i) fewer than 1 Million GSM subscriptions or (ii) 5% GSM market share. 2014 has been taken as the reference year;

It should be noted that the Useful Life-span of GSM chart above might not be shown in its entirety (in particular the right side of the chart). Clicking on the chart itself will display it in full.

AREAS                      #MARKETS   GSM – REMAINING LIFE
Western Europe                   16      4.1 +/- 3.3 years
Asia Pacific                     13      6.4 +/- 5.0 years
Middle East & Africa             17     11.0 +/- 6.2 years
Central Eastern Europe            8      6.9 +/- 4.8 years
Latin America                    19      6.6 +/- 3.7 years

That Western Europe (and the US, which is not shown here) has the most aggressive timelines for GSM phase-out should come as no surprise. 3G/UMTS has been deployed there the longest, and the 3G price level relative to GDP has come down to a level where there is hardly any barrier for most mobile users to switch from GSM to UMTS. The WEU region also has the most extensive UMTS coverage, which further removes the GSM-to-UMTS switching barrier. The Central Eastern Europe average is pulled up (i.e., toward a longer useful life) substantially by Russia and Ukraine, which show fairly extreme laggardness in GSM phase-out (in comparison with the other CEE markets). For Middle East & Africa it should be noted that there are two very distinct clusters of data distinguishing the Gulf States from the African countries; most of the Gulf States have only a very few years of remaining useful GSM life. In general the remaining-life trend of GSM can be described fairly well by the amount of time UMTS has been in a given market (though smartphone introduction kick-started the migration from GSM more than anything else), the extent of UMTS coverage (i.e., degree of pop and geo coverage) and the basic economics of UMTS.

In my analysis I have assumed 4 major triggers for GSM phase-out (a minimal code sketch of how these triggers could be screened follows the list);

  1. Analysis shows that once the 3G (or non-2G) ARPU drops below 2% of the nominal GDP per capita, migration away from GSM accelerates. I have (somewhat arbitrarily) chosen 1% as the limit below which there is no longer any essential barrier to customers migrating off GSM.
  2. A GSM penetration below 5% is taken as a decision point for converting (possibly via subsidies) the remaining GSM customers to a more modern and efficient technology. This obviously depends on the total customer base and the local economical framework, and as such is only a heuristic rather than a universal rule.
  3. My 3rd criterion for phasing out GSM is when its base drops below 1 million subscriptions (i.e., typically 500 to 800 thousand subscribers).
  4. Last but not least, before a complete phase-out of GSM can commence, operators obviously need to provide alternative-technology (e.g., UMTS or LTE) coverage that can replace the existing GSM coverage. This is in general only economical if a comparable frequency range can be used, and thus, for UMTS coverage replacement of GSM, in many cases requires re-farming/re-purposing 900MHz from GSM to UMTS. This last point can be a very substantial bottleneck and show-stopper for migration from GSM to UMTS, particularly in rural areas or in countries with very substantial rural populations on GSM.
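As promised above, here is a minimal sketch of how the four triggers could be screened per market. How the triggers are combined (coverage as a hard precondition, any one of the other three sufficing) is my own reading rather than something prescribed by the analysis; the thresholds follow the list above, and all parameter names are illustrative.

```python
# Screen a market against the four GSM phase-out triggers discussed above.
def gsm_phase_out_ready(next_gen_price_to_gdp, gsm_market_share,
                        gsm_subscriptions_m, replacement_coverage_ok):
    """next_gen_price_to_gdp: 3G/4G price level relative to GDP per capita,
    gsm_market_share: GSM share of total subscriptions (0..1),
    gsm_subscriptions_m: absolute GSM base in millions,
    replacement_coverage_ok: True if UMTS/LTE coverage at least matches the GSM footprint."""
    affordable_next_gen = next_gen_price_to_gdp < 0.01   # trigger 1 (the ~1% limit chosen above)
    negligible_share    = gsm_market_share < 0.05        # trigger 2
    negligible_base     = gsm_subscriptions_m < 1.0      # trigger 3
    return replacement_coverage_ok and (affordable_next_gen or negligible_share or negligible_base)

# e.g., a market with cheap 3G, 8% GSM share, 6M GSM subscriptions and a full UMTS overlay
print(gsm_phase_out_ready(0.008, 0.08, 6.0, True))   # -> True (trigger 1 plus coverage)
```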

Interestingly enough, extensive data analysis on more than 70 markets shows that the GSM phase-out dynamics appear to have little or no dependency on (a) the 2G ARPU level, (b) the 2G ARPU level relative to the 3G ARPU and (c) handset pricing (although I should point out that I have not had a lot of data here to be firm in this conclusion; in particular, reliable data for grey-market handset pricing across the emerging markets is a challenge).

One of the important trigger points for onset of accelerated GSM phase-out is the pricing level of the next technology (e.g., 3G) option relative to the GDP per capita.

The migration decision appears to have less to do with the legacy price of the old technology, or the old technology’s price relative to new technology pricing.

The chart above illustrates an analysis made on 2012 actual data for 70+ markets across WEU, CEE, APAC, EMEA and LA (i.e., coinciding with the markets covered by Pyramid Research). It is very interesting to observe the dynamics as the markets develop into the future and the data moves towards the left, indicating more affordable 3G pricing (relative to GDP per capita) and increasingly faster GSM phase-out, as is evident from the chart below providing the same markets as above but fast-forwarded 5 years (i.e., 2017).

Firstly, the GSM ARPU level across most markets is below 2% of a given market’s GDP per capita. There is no clear evidence in the available country data that the GSM ARPU development has had any effect on slowing down or accelerating GSM phase-out – most likely an indication that GSM has reached (or will shortly reach) a cost level where customers become insensitive.

Conceptually we can visualize the GSM phase-out dynamics in the following way: as 3G gets increasingly affordable (which may, or should, include the device cost, depending on taste), GSM phase-out accelerates (i.e., moving from right to left in the illustrative chart below). While the chart illustration below is more attuned to the migration dynamics of GSM phase-out in emerging markets, it can of course, with minor adaptations, be used for other, more balanced prepaid-postpaid markets.

We should keep in mind that unless the mobile operator’s new-technology coverage (e.g., UMTS, LTE, …) at the very least overlaps the GSM coverage, the migration from GSM to UMTS (or LTE) will eventually stop. In countries with a substantial rural population this can become a real stumbling block for an effective 100% migration, resulting in large areas and population shares that remain under-served (i.e., only GSM available) and thus dependent on an inefficient and ancient technology, without the macro-economical benefits (i.e., a boost of rural GDP) that new and far more efficient cellular technologies could bring.

That’s all fine … what a surprise that customers want better when it gets affordable (they probably wanted it even more when it was not affordable) … and that affordability is relative is hardly surprising either.

In order for an operator to form an informed opinion about when to switch off GSM, it would need to evaluate the remaining business opportunity, or residual GSM value, against the value of re-purposing the GSM spectrum to a better technology, i.e., one with a superior customer-experience potential and with substantially higher ARPU utilization.

Counting from 2014, the remaining-lifetime (aka terminal, aka residual) GSM revenue will be in the order of 850 Billion US$ … admittedly an apparently dramatic number … however, the residual GSM revenue is on average no more than 5% of total cellular turnover, and for many countries a lot lower than that. In fact, 45 markets out of the 73 studied will have a terminal GSM revenue lower than 5%.

The chart below provides an overview of the residual GSM revenues in Billions of US$ (on a logarithmic scale) and the percentage of residual GSM value out of the total cellular turnover (linear scale) for 75 top markets spread across Western Europe, Central Eastern Europe, Asia Pacific, Middle East & Africa, and Latin America.

Do note that the GSM Terminal Revenue chart above might not be shown in its entirety (in particular the right side of the chart). Clicking on the chart itself will display it in full.

It is quite clear from the above chart that, apart from a few outliers, GSM revenues are fading out rapidly across a majority of the mobile markets across the globe. Even if the residual GSM top line might appear tempting, it obviously needs to be compared to the operating expenses of sustaining the legacy technology, as well as considering that a more modern technology would create higher efficiency (and possibly ARPU arbitrage) and therefore mitigate margin decline while sustaining more traffic and customers.

Emerging APAC MNO Example: an emerging market in APAC has 100 Million subscriptions and a ca. 70 Million unique cellular user base. One of the Mobile Network Operators (MNOs) in this market has approx. 33% market share (revenue share slightly larger). In 2012 its EBITDA margin was 42%. The technology cost share of overall Opex is 25%, and for the sake of simplicity the corresponding GSM cost share in 2012 is assumed to be 50% of the total technology Opex. As the business evolves it is assumed that the GSM cost base grows more slowly than the non-GSM technology cost elements. This particular market has a residual GSM revenue potential of approx. 4 Billion US$, and the MNO under the loupe has 1.3 Billion US$ of remaining GSM revenue potential.

Our analysis shows that the GSM business would start to break down (within the assumed economical framework or template) at around 5 Million GSM subscriptions or 3.5 Million unique users. This would happen around 2019 (+/- 2 years, with a bias towards the earlier years) and thus leaves the business with another 3 to 5 years of likely profitable GSM operation. See the chart below.

This illustration shows (not surprisingly) that there is a point where, even if the phasing-out GSM business still turns over revenue, from an economical perspective it makes no sense for a single mobile operator to keep its GSM network alive for a diminishing customer base and an even faster-evaporating top line.
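The break-even logic of the example can be sketched in a few lines: project the operator’s GSM revenue and its GSM-attributable Opex forward and find the first year the top line no longer covers the cost. The starting values and the decline/drift rates below are my own placeholders, chosen only to make the mechanics visible; they are not the example’s actual inputs.

```python
# Find the first year a declining GSM top line falls below its (sticky) cost base.
def gsm_breakeven_year(rev_2012_musd, opex_2012_musd, rev_decline, opex_drift, horizon=15):
    rev, opex = rev_2012_musd, opex_2012_musd
    for year in range(2013, 2013 + horizon):
        rev *= (1 - rev_decline)      # GSM revenue eroding as customers migrate
        opex *= (1 + opex_drift)      # legacy cost base is hard to shed proportionally
        if rev < opex:
            return year               # first year GSM no longer covers its own cost
    return None

# illustrative inputs: 400 MUSD GSM revenue and 120 MUSD GSM-attributable Opex in 2012,
# ~16% annual revenue erosion and ~2% annual cost drift
print(gsm_breakeven_year(400.0, 120.0, rev_decline=0.16, opex_drift=0.02))
```

With these placeholder inputs the crossover lands around 2019, in the same ballpark as the example above, but the shape of the answer depends entirely on the assumed erosion and drift rates.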

In the example above it is clear that the MNO should start planning for the inevitable – the demise of GSM. Having a clear GSM phase-out strategy as soon as possible and targeting GSM termination no later than 2018 to 2019 just makes pretty good sense. Looking at the risks to the market-development dynamics in this particular market, there is a higher likelihood of the no-profit point being reached earlier rather than later.

Would it make sense to start up a new GSM business in the market above? Given the 3 to 5 years that the existing mobile operators have to retire GSM before it becomes unprofitable, it hardly makes much sense for a greenfield operator to get started on the GSM idea (there seem to be better ways of spending cash).

However, if that greenfield operator could become The GSM Operator for all existing MNO players in the market, allowing those legacy MNOs to re-purpose their existing GSM spectrum (possibly with a retro-active wholesale deal), then maybe in the short term it might make a little sense. However, quite frankly it would be like peeing in your trousers on a cold winter day: it will be warm for a short while, but then it really gets cold (as my Grandmother used to say).

What GSM strategies really make sense in its autumn days?

Quite clearly, GSM Network Sharing would make a lot of sense economically and operationally, as it would allow re-purposing of legacy spectrum to more modern and substantially more efficient cellular technologies.

The single Shared-GSM network would act as a bridge for legacy GSM M2M devices, extreme laggards and problematic coverage areas that might not be economical to replace in the shorter to medium term. Mobile operators could then also honor possible long-term contractual obligations to businesses and consumers with fringe devices connecting over GSM (i.e., metering, alarms, etc.). The single Shared-GSM network might very well survive for a considerable time past any economical justification for its existence, merely serving legal and political interests. Thanks to Stein Erik Paulsen, who pointed out this problem for GSM phase-out.

I am not (too) hung up about the general Capex & Opex benefits of Network Sharing in this context (yet another story for another day). The compelling logical step of having 1 (ONE) GSM network across a given market, shared by all operators, supporting the phase-out of GSM while allowing operators to re-purpose legacy GSM spectrum for UMTS/HSPA and eventually LTE(+), is almost screamingly obvious. This would furthermore feed a faster migration pace and phase-out, as legacy spectrum would become available for re-purposing and customer migration.

Of course the regulatory authorities would need to endorse such a scenario, as it would de facto result in something smelling like a monopolistic GSM operator, albeit one serving all players in a given market.

The Regulatory Authority should obviously be very interested in this strategy, as it would ensure substantially better utilization of scarce spectral resources. Furthermore, there is not only the gain in spectral efficiency but also the macro-economical boost from connecting the unconnected and under-served population groups to mobile data networks, and by that, the internet.

ACKNOWLEDGEMENT

I have made extensive use of historical and actual data from Pyramid Research country data bases. Wherever possible this data has been cross checked with other sources. In my opinion Pyramid Research have some of the best and most detailed mobile technology projections that would satisfy even the most data savvy analysts. The very extensive data analysis on Pyramid Research data sets are my own and any short falls in the analysis clearly should only be attributed to myself.

SMS – Assimilation is inevitable, Resistance is Futile!

Short Message Service, or SMS for short, one of the cornerstones of mobile services, just turned 20 years old in 2012.

Talk about “Live Fast, Die Young” and the chances are that you are talking about SMS!

The demise of SMS has already been heralded … Mobile operators are rightfully shedding tears over the (taken-for-granted?) decline of the most profitable 140 Bytes there ever was and possibly ever will be.

Before we completely kill off SMS, let’s have a brief look at

SMS2012

The average SMS user (across the world) consumed 136 SMS (ca. 19 kByte) per month and paid 4.6 US$-cent per SMS and 2.6 US$ per month. Of course this is a worldwide average and should not be over-interpreted. For example, in the Philippines an average SMS user consumes 650+ SMS per month and pays 0.258 US$-cent per SMS, or 1.17 US$ per month. The other extreme end of the SMS usage distribution we find in Cameroon, with 4.6 SMS per month at 8.19 US$-cent per SMS.

We have all seen the headlines throughout 2012 (and the better part of 2011) of SMS dying, SMS disaster, SMS usage dropping and revenues being annihilated by OTT applications offering messaging for free, etc. etc. … & blablabla … “Mobile operators almost clueless and definitely blameless of the SMS challenges” … Right? … hmmmm, maybe not so fast!

All major market regions (i.e., WEU, CEE, NA, MEA, APAC, LA) have experienced a substantial slowdown of SMS revenues in 2011 and 2012, a trend that is expected to continue and accelerate with the mobile operators’ push for mobile broadband. Last but not least, SMS volumes have slowed down as well (though less severely than the revenue slowdown) as the signalling-based short message service assimilates into IP-based messaging via mobile applications.

Irrespective of all the drama! SMS phase-out is obvious (and has been for many years) … with the introduction of LTE, SMS will be retired.

Resistance is (as the Borg would say) Futile!

It should be clear that the phase out of SMS does Absolutely Not mean that messaging is dead or in decline. Far far from it!

Messaging is Stronger than Ever and just got so many more communication channels beyond the signalling network of our legacy 2G & 3G networks.

It is, however, important to understand how long the assimilation of SMS will take and which drivers impact the speed of the SMS assimilation. From an operator-strategic perspective, such considerations provide insight into how quickly SMS legacy revenues will need to be replaced by proportional data revenues, lest the operator suffer increasingly on both the top and bottom line.

SMS2012 AND ITS GROWTH DYNAMICS

So let’s just have a look at the numbers (with the cautionary note that some care needs to be taken with exchange-rate effects between the US Dollar and local currencies across the various markets being wrapped up in a regional and a world view; further, due to the structure of bundled propositions, product-based revenues such as SMS revenues can be, and often are, somewhat uncertain, depending on the sophistication of a given market):

2012 is expected worldwide to deliver more than 100 billion US Dollars in SMS revenues on more than 7 trillion revenue generating SMS.

The 100 Billion US Dollars is ca. 10% of the total worldwide mobile turnover. This is not much different from the 3 years prior, and 1+ percentage point up compared to 2008. Data revenues excluding SMS are expected in 2012 to be beyond 350 Billion US Dollars, or 3.5 times the SMS revenues, or 30+% of total worldwide mobile turnover (5 years ago this was 20% and ca. 2+ times the SMS revenues).

SMS growth has slowed down over the last 5 years. Over the last 5 years the worldwide SMS revenue CAGR was ca. 7%. Between 2011 and 2012 SMS revenue growth is expected to be no more than 3%. Western Europe and Central Eastern Europe are both expected to generate less SMS revenue in 2012 than in 2011. SMS volume grew by more than 20% per annum over the last 5 years, but the SMS volume generated in 2012 is not expected to be more than 10% higher than in 2011.

For the ones who like to compare SMS to data consumption (and please save us from ludicrous claims about the benefits of satellites and other ideas born out of too many visits to Dutch coffee shops):

The 2012 SMS volume corresponds to 2.7 Tera Bytes of daily data (not a lot! Really, it is not!).

Don’t be terribly excited about this number! It is like nano-dust compared to the total mobile data volume generated worldwide.

The monthly Byte equivalent of SMS consumption is no more than 20 kilo Byte per individual mobile user in Western Europe.

Let us have a look at how this distributes across the world broken down in Western Europe (WEU), Central Eastern Europe (CEE), North America (NA), Asia Pacific (APAC), Latin America (LA) and Middle East & Africa (MEA):

From the above chart we see that

Western Europe takes almost 30% of total worldwide SMS revenues but its share of total SMS generated is less than 10%.

And this to some extent also explains why Western Europe might be more exposed to the SMS phase-out than some other markets. We have already seen evidence of Western Europe’s sensitivity to SMS revenues back in 2011, a trend that will spread to many more markets in 2012 and lead to an overall negative SMS revenue story for Western Europe in 2012. We will see that within some of the other regions there are countries that are substantially more exposed to the SMS phase-out than others in terms of SMS share of total mobile turnover.

In Western Europe a consumer would pay more than 7 times the price for an SMS compared to a consumer in North America (i.e., Canada or USA). It is quite clear that Western Europe has been very successful in charging for SMS compared to any other market in the world. And consumers have gladly paid the price (well, I assume so ;-).

SMS revenues are proportionally much more important in Western Europe than in other regions (maybe with the exception of Latin America).

In 2012 17% of Total Western Europe Mobile Turnover is expected to come from SMS Revenues (was ca. 13% in 2008).

WHAT DRIVES SMS GROWTH?

It is interesting to ask what drives SMS behaviour across various markets and countries.

Prior to reasonably good-quality 3G networks, and as importantly prior to the emergence of the smartphone, the SMS usage dynamics between different markets could easily be explained by relatively few drivers, such as

(1) Price decline year on year (the higher the decline, the faster SMS per user grows, though the rate and impact will depend on Smartphone penetration & 3G quality of coverage).

(2) Price of an SMS relative to the price of a Minute (the lower it is, the more SMS per user; in many countries there is a clear arbitrage in sending an SMS versus making a call, which on average lasts between 60 – 120 seconds).

(3) Prepaid to Contract ratios (higher prepaid ratios tend to result in fewer SMS, though this relationship is not per se very strong).

(4) SMS ARPU to GDP (or average income if available) (the lower it is, the higher the usage tends to be).

(5) 2G penetration/adoption, and

(6) literacy ratios (particularly important in emerging markets; the lower the literacy rate, the lower the amount of SMS per user tends to be).

Finer, more detailed models can be built with many more parameters. However, the 6 given here will provide a very decent worldview of SMS dynamics (i.e., amount and growth) across countries and cultures. So for mature markets we are really talking about the time before 2009 – 2010, when Smartphone penetration started to approach or exceed 20% – 30% (beyond which the model becomes a bit more complex).

In markets where the Smartphone penetration is beyond 30% and 3G networks have reached a certain coverage-quality level, the models describing SMS usage and growth change to include Smartphone penetration and, to a lesser degree, 3G uptake (note that Smartphone penetration and 3G uptake are not independent parameters, and as such one or the other often suffices from a modelling perspective).

Looking at SMS usage and growth dynamics after 2008, I have found high-quality statistical and descriptive models for SMS growth using the following parameters;

(a) SMS Price Decline.

(b) SMS price to MoU Price.

(c) Prepaid percentage.

(d) Smartphone penetration (Smartphone penetration has a negative impact on SMS growth and usage – unsurprisingly!)

(e) SMS ARPU to GDP

(f) 3G penetration/uptake (higher 3G penetration combined with very good coverage has a negative impact on SMS growth and usage, though it is less important than Smartphone penetration).

It should be noted that each of these parameters varies with time, and therefore, when extracting them from a comprehensive dataset, the time variation should be considered in order to produce a high-quality descriptive model for SMS usage and growth.
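As a sketch of what such a descriptive model could look like in practice, the snippet below fits an ordinary least-squares model on a (hypothetical) market-year panel using the parameters (a) to (f) above. The column names and the OLS choice are placeholders of mine; the actual dataset and model specification behind the analysis are not reproduced here.

```python
# Fit a simple descriptive model of SMS-per-user growth on the drivers (a)-(f).
import pandas as pd
import statsmodels.api as sm

def fit_sms_growth_model(panel):
    """panel: one row per market per year with the (hypothetical) columns listed below."""
    features = ["sms_price_decline", "sms_to_mou_price", "prepaid_share",
                "smartphone_penetration", "sms_arpu_to_gdp", "penetration_3g"]
    X = sm.add_constant(panel[features])
    y = panel["sms_per_user_growth"]
    return sm.OLS(y, X).fit()

# model = fit_sms_growth_model(panel_df)   # panel_df: hypothetical market-year panel
# print(model.summary())                   # inspect coefficients and fit quality
```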

If a market and its mobile operators would like to protect their SMS revenues, or at least slow down the assimilation of SMS, the operators clearly need to understand whether pushing Smartphones and mobile data can make up for the decline in SMS revenues that is bound to happen with the hard push of mobile broadband devices and services.

EXPOSURE TO LOSS OF SMS REVENUE – A MARKET BY MARKET VIEW!

As we have already seen and discussed, it is not surprising that SMS is declining or stagnating, at least in its present form and business model. Mobile broadband, the Smartphone and its many applications have created a multi-verse of alternatives to the SMS. Where in the past SMS was a clear convenience and often a much cheaper alternative to an equivalent voice call, today SMS has become inconvenient and not per se a cost-efficient alternative to voice, and certainly not when compared with IP-based messaging via a given data plan.

74 countries (or markets) have been analysed for their exposure to the SMS decline in terms of the share of SMS revenues out of the total mobile turnover. 4 categories have been identified: (1) very high risk, >20%; (2) high risk, 10% – 20%; (3) medium risk, 5% – 10%; and (4) lower risk, when the SMS revenues are below 5% of total mobile turnover.

As mobile operators push hard for mobile broadband and inevitably increase Smartphone penetration rapidly, SMS will decline. In the “end-game” of LTE, SMS will have been phased out altogether.

Based on 2012 expectations, let’s look at the risk exposure that the SMS phase-out brings in a market-by-market outlook;

We see from the above analysis that 9 markets (out of a total of 74 analyzed), with the Philippines taking the pole position, have what could be characterized as a very high exposure to the SMS decline. The UK market, with more than 30% of revenues tied up in SMS, has aggressively pushed for mobile broadband and LTE. It will be very interesting to follow how UK operators will mitigate the exposure to SMS decline as LTE penetrates the market. We will see whether LTE (and other mobile broadband propositions) can make up for the SMS decline.

More than 40 markets have an SMS revenue dependency of more than 10% of total mobile turnover and thus have a substantial exposure to the SMS decline that needs to be mitigated by changes to the messaging business model.

Mobile operators around the world still need to crack this SMS assimilation challenge … a good starting point would be to stop blaming OTT for all the evils and instead either manage their mobile broadband push and/or start changing their SMS business model into an IP-messaging business model.

IS THERE A MARGIN EXPOSURE BEYOND LOSS OF SMS REVENUES?

There is no doubt that SMS is a high-margin service, if not the highest, for The Mobile Industry.

A small detour into the price of SMS and the comparison with the price of mobile data!

The Basic: an SMS is 140 Bytes and max 160 characters.

On average (worldwide) an SMS user pays (i.e., in 2012) ca. 4.615 US$-cent per short message.

A Mega-Byte of data is equivalent to 7,490 SMSs which would have a “value” of ca. 345 US Dollars.

Expensive?

Yes! It would be, if that were the price a user would pay for mobile broadband data (particularly at an average Smartphone consumption of 100 Mega Bytes per month) …

However, remember that an average user (worldwide) consumes no more than 20 kilo Byte per Month.

One Mega-Byte worth of SMS would thus last for more than 50 months, or more than 4 years.

This is just to illustrate the silliness of getting into SMS value comparison with mobile data.
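For completeness, the back-of-the-envelope arithmetic behind the numbers above, spelled out (using the 2012 worldwide averages quoted earlier):

```python
# The "value of an SMS megabyte" arithmetic.
SMS_BYTES = 140
price_per_sms_usd = 0.046                      # ~4.6 US cents per SMS (2012 worldwide average)
monthly_sms_bytes = 20_000                     # ~20 kByte of SMS per user per month

sms_per_megabyte = 1_048_576 / SMS_BYTES                      # ~7,490 messages per MB
value_per_megabyte = sms_per_megabyte * price_per_sms_usd     # ~345 USD per "SMS megabyte"
months_per_megabyte = 1_048_576 / monthly_sms_bytes           # ~52 months, i.e. 4+ years

print(round(sms_per_megabyte), round(value_per_megabyte), round(months_per_megabyte, 1))
```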

A Byte is not just a Byte; it depends on what that Byte carries!

It’s quite clear that SMS-equivalent IP-based messaging does not pose much of a challenge to a mobile broadband network, be it HSPA-based or LTE-based. To some extent, IP-based messaging (as long as it is equivalent to 140 Bytes) should be deliverable at a better or similar margin as in a legacy 2G mobile network.

Thus, in my opinion a 140 Byte message should not cost more to deliver in an LTE or HSPA based network. In fact, due to better spectral efficiency and at equivalent service levels, the cost of delivering 140 Bytes over LTE or HSPA should be a lot less than over GSM (or CS-3G).

However, if the mobile operators are not able to adapt their messaging business models to recover the SMS revenues at risk of being lost to the assimilation process of pushing mobile data (which, given the margin argument above, might not need to be a dollar-for-dollar recovery but could be less) … well, then a substantial margin decline will be experienced.

Operators in the danger zone of SMS revenue exposure, i.e., with an SMS revenue share exceeding 10% of the total mobile turnover, should urgently start strategizing on how they can control the SMS assimilation process without substantial financial loss to their operations.

ACKNOWLEDGEMENT

I have made extensive use of historical and actual data from the Pyramid Research country databases. Wherever possible this data has been cross-checked with other sources. Pyramid Research has some of the best and most detailed mobile technology projections, satisfying most data-savvy analysts. The very extensive data analysis on the Pyramid Research data sets is my own, and any shortfalls in the analysis should clearly be attributed to myself alone.

The Economics of the Thousand Times Challenge: Spectrum, Efficiency and Small Cells

By now the biggest challenge of the “1,000x challenge” is to read yet another story about the “1,000x challenge”.

This said, Qualcomm has made many beautiful presentations on The Challenge. They leave the reader with the impression that it is much less of a real challenge, as there appears to be a solution for everything and then some.

So bear with me while we take a look at the Economics and in particular the Economical Boundaries around the Thousand Times “Challenge” of providing (1) More spectrum, (2) Better efficiency and last but not least (3) Many more Small Cells.

THE MISSING LINK

While (almost) every technical challenge is solvable by clever engineering (i.e., something Qualcomm obviously has in abundance), it does not follow naturally that such solutions are also feasible within the framework imposed by real-world economics. At the very least, any technical solution should also be reasonable within the world of economics (and of course within a practical time-frame), or it becomes a clever solution that is irrelevant to a real-world business.

A business will (maybe “should” is more in line with reality) care about customer happiness. However, a business needs to do that within healthy financial boundaries of margin, cash and shareholder value. Not only should the customer be happy, but the happiness should extend to investors and shareholders who have trusted the business with their livelihood.

While technically, and almost mathematically, it follows that massive network densification would be required in the next 10 years IF WE KEEP FEEDING CUSTOMER DEMAND, it might not be very economical to do so, or at the very least such densification only makes sense within a reasonable financial envelope.

It is obvious that massive network densification by means of macro-cellular expansion is unrealistic, impractical as well as uneconomical. Thus Small Cell concepts, including WiFi, have been brought to the Telecoms scene as an alternative and credible solution. While Small Cells are much more practical, the question of whether they sufficiently address the economical boundaries the Telecommunications Industry is facing remains pretty much unanswered.

PRE-AMP

The Thousand Times Challenge, as it has been PR’ed by Qualcomm, states that the cellular capacity required in 2020 will be at least 1,000 times that of “today”. Actually, the 1,000 times challenge is referenced to the cellular demand & supply of 2010, so doing the math, the 1,000x might “only” be a 100-times challenge between now and 2020 in the world of Qualcomm and alike. Not that it matters! … We still talk about the same demand, just referenced to a later (and maybe less “sexy”) year.

In my previous blogs, I have accounted for the dubious affair (and nonsensical discussion) of over-emphasizing cellular data growth rates (see “The Thousand Times Challenge: The answer to everything about mobile data”) as well as the much more intelligent discussion about how the Mobile Industry provides for more cellular data capacity, starting with the existing mobile networks (see “The Thousand Time Challenge: How to provide cellular data capacity?”).

As it turns out, Cellular Network Capacity C can be described by 3 major components: (1) available bandwidth B, (2) (effective) spectral efficiency E and (3) the number of cells deployed N.

The SUPPLIED NETWORK CAPACITY in Mbps (i.e., C) is equal to the AMOUNT OF SPECTRUM, i.e., available bandwidth, in MHz (i.e., B) multiplied by the SPECTRAL EFFICIENCY PER CELL in Mbps/MHz (i.e., E) multiplied by the NUMBER OF CELLS (i.e., N); in short, C = B × E × N. For more details on how and when to apply the Cellular Network Capacity Equation, read my previous blog “How to provide Cellular Data Capacity?”.

SK Telekom (SK Telekom’s presentation at the 3GPP workshop on “Future Radio in 3GPP” is worth a careful study), Mallinson (@WiseHarbor) and Qualcomm (@Qualcomm_tech, and many others as of late) have used the above capacity equation to impose a target amount of cellular network capacity that a mobile network should be able to supply by 2020. Realistic or not, this target comes to 1,000 times the supplied capacity level of 2010 (i.e., I assume that 2010 – 2020 sounds nicer than 2012 – 2022 … although the latter would have been a lot more logical to aim for if one really wanted to look at 10 years … of course that might not give 1,000 times, which might ruin the marketing message?).

So we have the following 2020 Cellular Network Capacity Challenge:

Thus a cellular network in 2020 should have 3 times more spectral bandwidth B available (that’s fairly easy!), 6 times higher spectral efficiency E (so-so … but not impossible, particularly compared with 2010) and 56 times higher cell-site density N (this one might be a “real killer challenge” in more than one way), compared to 2010!

Personally I would not get too hung up about whether it is 3 x 6 x 56 or 6 x 3 x 56 or some other set of “multiplicators” resulting in a 1,000 times gain (though some combinations might be a lot more feasible than others!).
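In code, the capacity equation and the quoted split look like this (the division by 10 in the last line assumes, as the text implies, that supplied capacity already grew roughly tenfold between 2010 and 2012):

```python
# Supplied capacity gain relative to the 2010 baseline: C = B x E x N.
def capacity_gain(bandwidth_x, efficiency_x, cells_x):
    return bandwidth_x * efficiency_x * cells_x

print(capacity_gain(3, 6, 56))        # ~1,008, i.e. the "1,000x" 2020 target vs 2010
print(capacity_gain(3, 6, 56) / 10)   # ~100x if referenced to ~2012 instead (rough assumption)
```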

Obviously we do NOT need a lot of insights to see that the 1,000x challenge is a

Rally call for Small & then Smaller Cell Deployment!

Also, we do not need to be particularly visionary (or have visited a Dutch coffee shop) to predict the following for 2020 (aka The Future) compared to today (i.e., October 2012):

Data demand from mobile devices will be a lot higher in 2020!

Cellular Networks have to (and will!) supply a lot more data capacity in 2020!

Footnote: the observant reader will have seen that I am not making the claim that there will be hugely more data traffic on the cellular network in comparison to today. The WiFi path might (and most likely will) take a lot of the traffic growth away from the cellular network.

BUT

how economical will this journey be for the Mobile Network Operator?

THE ECONOMICS OF THE THOUSAND TIMES CHALLENGE

Mobile Network Operators (MNOs) will not have the luxury of getting the Cellular Data Supply and Demand Equation Wrong.

The MNO will need to balance network investments with pricing strategies, churn & customer experience management as well as overall profitability and corporate financial well being:

Growth, if not manage, will lead to capacity & cash crunch and destruction of share holder value!

So for the Thousand Times Challenge, we need to look at the Total Cost of Ownership (TCO) or Total Investment required to get to a cellular network with 1,000 times more network capacity than today. We need to look at:

Investment I(B) in additional bandwidth B, which would include (a) the price of spectral re-farming (i.e., re-purposing legacy spectrum to a new and more efficient technology), (b) technology migration (e.g., moving customers off 2G and onto 3G or LTE or both) and (c) possible acquisition of new spectrum (i.e., via auction, beauty contests, or M&As).

Improving a cellular network’s spectral efficiency I(E) is also likely to result in additional investments. In order to get an improved effective spectral efficiency, an operator would be required to (a) modernize its infrastructure, (b) invest in better antenna technologies, and (c) ensure that customer migration from older, spectrally inefficient technologies to more spectrally efficient technologies occurs at an appropriate pace.

Last but NOT Least the investment in cell density I(N):

Needing 56 times additional cell density is most likely NOT going to be FREE,

even with clever small cell deployment strategies.

Though I am pretty sure that some, out there in the Operator space, will make a very positive business case comparing a macro-cellular expansion to a Small Cell deployment that avoids massive churn in case of outrageous cell congestion, rather than focusing on managing growth before such an event would occur (note: the choice between Plague & Cholera might come out in favor of Cholera … though we would rather avoid both of them).

The Real “1,000x” Challenge will be Economical in nature and will relate to the following considerations:

In other words:

Mobile Networks required to supply 1,000 times present-day cellular capacity are also required to provide that capacity gain at a substantially lower ABSOLUTE Total Cost of Ownership.

I emphasize the ABSOLUTE aspects of the Total Cost of Ownership (TCO), as I have too many times seen our Mobile Industry present financial benefits in relative terms (i.e., relative to a given quality improvement) and then fail to mention that in absolute terms the industry will incur increased Opex (compared to the pre-improvement situation). Thus a margin decline (i.e., unless proportional revenue is gained … and how likely is that?) as well as a negative cash impact due to the increased investments needed to gain the improvements (i.e., again assuming that a proportional revenue gain remains wishful thinking).

Never Trust relative financial improvements! Absolutes don’t Lie!

THE ECONOMICS OF SPECTRUM.

Spectrum economics can be captured by three major themes: (A) ACQUISITION, (B) RETENTION and (C) PERFECTION. These 3 major themes should be well considered in any credible business plan: Short, Medium and Long-term.

It is fairly clear that there will not be a lot of new lower-frequency (defined here as <2.5GHz) spectrum available in the next 10+ years (unless we get a real breakthrough in white-space). The biggest relative increase in cellular bandwidth dedicated to mobile data services will come from re-purposing (i.e., perfecting) existing legacy spectrum (i.e., by re-farming), and from acquisition of some new bandwidth in the low frequency range (<800MHz), which per definition will not be a lot of bandwidth and will take time to become available. There are opportunities in the very high frequency range (>3GHz), which contains a lot of bandwidth. However, this is only interesting for Small Cell and Femto Cell like deployments (a feeding frenzy for small cells!).

As many European Countries re-auction existing legacy spectrum after the set expiration period (typically 10 – 15 years), it is paramount for a mobile operator to retain as much as possible of its existing legacy spectrum. Not only is current traffic tied up in the legacy bands, but future growth of mobile data will critically depend on their availability. Retention of the existing spectrum position should be a very important element of an Operator’s business plan and strategy.

Most real-world mobile network operators that I have looked at can expect, by acquisition & perfection, to gain between 3 and 8 times the spectral bandwidth for cellular data compared to today’s situation.

For example, a typical Western European MNO has

  1. Max. 2x10MHz @ 900MHz, primarily used for GSM. Though some operators have UMTS900 in operation or plan to re-farm to UMTS pending regulatory approval.
  2. 2×20 MHz @ 1800MHz, though here the variation tends to be fairly large in the MNO spectrum landscape, i.e., between 2x30MHz down to 2x5MHz. Today this is exclusively in use for GSM. This is going to be a key LTE band in Europe and is already supported in the iPhone 5 for LTE.
  3. 2×10 – 15 MHz @ 2100MHz is the main 3G-band (UMTS/HSPA+) in Europe and is expected to remain so for at least the next 10 years.
  4. 2×10 MHz @ 800MHz per operator, typically distributed across 3 operators and dedicated to LTE. In countries with more than 3 operators some MNOs will typically have no position in this band.
  5. 40 MHz @ 2.6 GHz per operator and dedicated to LTE (FDD and/or TDD). From a coverage perspective this spectrum would in general be earmarked for capacity enhancements rather than coverage.

Note that most European mobile operators did not have 800MHz and/or 2.6GHz in their spectrum portfolios prior to 2011. The above list has been visualized in the Figure below (though only for FDD and showing the single side of the frequency duplex).

The 700MHz band will eventually become available in Europe for LTE-advanced (it is already in use for LTE in the USA via AT&T and VRZ). Though the time frame for 700MHz cellular deployment in Europe is still expected to be maybe up to 8 years (or more) before the band is fully cleared and perfected.

Today (as of 2012) a typical European MNO would have approximately (a) 60 MHz (i.e., DL+UL) for GSM, (b) 20 – 30 MHz for UMTS and (c) between 40MHz – 60MHz for LTE (note that in 2010 this would have been 0MHz for most operators!). By 2020 it would be fair to assume that same MNO could have (d) 40 – 50 MHz for UMTS/HSPA+ and (e) 80MHz – 100MHz for LTE. Of course it is likely that mobile operators still would have a thin GSM layer to support roaming traffic and extreme laggards (this is however likely to be a shared resource among several operators). If by 2020 10MHz to 20MHz would be required to support voice capacity, then the MNO would have at least 100MHz and up-to 130MHz for data.
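As a back-of-the-envelope illustration of the portfolio arithmetic above, the sketch below simply mirrors the rough 2012 and 2020 numbers quoted in the paragraph, using mid-points where a range is given; the figures are illustrative, not a forecast for any specific operator.

```python
# Back-of-the-envelope tally of the spectrum portfolio (DL+UL, in MHz).
# Figures mirror the rough 2012 vs 2020 numbers above, using mid-points of the ranges.
portfolio_2012 = {"GSM": 60, "UMTS": 25, "LTE": 50}
portfolio_2020 = {"GSM": 15, "UMTS": 45, "LTE": 90}   # thin, shared GSM layer assumed

voice_reserve_2020 = 15                                # ~10 - 20 MHz kept for voice by 2020
data_2012 = portfolio_2012["UMTS"] + portfolio_2012["LTE"]
data_2020 = portfolio_2020["UMTS"] + portfolio_2020["LTE"] - voice_reserve_2020

print(data_2012, data_2020)   # -> ~75 MHz vs ~120 MHz available for cellular data
```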

Note: if we Fast-Backward to 2010, assume that no 2.6GHz or 800MHz auctions had happened and that only 2×10 – 15 MHz @ 2.1GHz provided for cellular data capacity, then we easily get a factor 3 to 5 boost in spectral capacity for data over the period. This is just to illustrate the meaninglessness of relativizing the challenge of providing network capacity.

So what are the economic aspects of spectrum? Well, show me the money!

Spectrum:

  1. needs to be Acquired (including re-acquired = Retention) via (a) Auction, (b) Beauty contest or (c) Private transaction if allowed by the regulatory authorities (i.e., spectrum trading). Usually spectrum (in Europe at least) will be a time-limited right-to-use (e.g., 10 – 15 years) => Capital investments to (re)purchase spectrum.
  2. might need to be Perfected & Re-farmed to another more spectral efficient technology => new infrastructure investments & customer migration cost (incl. acquisition, retention & churn).
  3. new deployment with coverage & service obligations => new capital investments and associated operational cost.
  4. demand could result in joint ventures or mergers to acquire sufficient spectrum for growth.
  5. often has a recurring usage fee associated with its deployment => Operational expense burden.

The first 3 bullet points can be attributed mainly to Capital expenditures, and point 5 would typically be an Operational expense. As we have seen in the US with the failed AT&T – T-Mobile US merger, point 4 can result in a very high cost of spectrum acquisition. Though usually a merger brings with it many beneficial synergies, other than spectrum, that justify such a merger.

The above Figure provides a historical view on spectrum pricing in US$ per MHz-pop. As we can see, not all spectrum has been born equal, and depending on the timing of acquisition a premium might have been paid for some spectrum (e.g., the Western European UMTS hyper-pricing of 2000 – 2001).

Some general spectrum acquisition heuristics can be derived by above historical overview (see my presentation “Techno-Economical Aspects of Mobile Broadband from 800MHz to 2.6GHz” on @slideshare for more in depth analysis).

Most of the operator cost associated with Spectrum Acquisition, Spectrum Retention and Spectrum Perfection should be more or less included in a Mobile Network Operator’s Business Plans. Though the demand for more spectrum can be accelerated (1) in highly competitive markets, (2) in spectrum-starved operations, and/or (3) if customer demand is being poorly managed within the spectral resources available to the MNO.

WiFi, or in general any open radio-access technology operating in ISM bands (i.e., freely available frequency bands such as 2.4GHz, 5.8GHz), can be a source of mitigating costly controlled-spectrum resources by stimulating higher usage of such open-technologies and open-bands.

The cash prevention or cash optimization from open-access technologies and frequency bands should not be under-estimated or forgotten. Even if such open-access deployment models do not make standalone economic sense, they are likely to make good sense as an integral part of the Next Generation Mobile Data Network, perfecting & optimizing open and controlled radio-access technologies.

The Economics of Spectrum Acquisition, Spectrum Retention & Spectrum Perfection is of such tremendous benefit that it should be in any Operator’s business plans: short, medium and long-term.

THE ECONOMICS OF SPECTRAL EFFICIENCY

The relative gain in spectral efficiency (as well as in other radio performance metrics) with new 3GPP releases has been amazing between R99 and recent HSDPA releases. Lots of progress has been booked on the account of increased receiver and antenna sophistication.

If we compare HSDPA 3.6Mbps (see above Figure) with the first Release of LTE, the spectral efficiency has been improved by a factor of 4. Combined with more available bandwidth for LTE, this provides an even larger relative boost of supplied bandwidth for increased capacity and customer quality. Do note that the above relative representation of spectral efficiency gain largely takes away the usual (almost religious) discussions of what the right spectral efficiency is and at what load. The effective (whatever that may be in your network) spectral efficiency gain moving from one radio-access release or generation to the next would be represented by the above Figure.

Theoretically this is all great! However,

Having the radio-access infrastructure supporting the most spectral efficient technology is the easy part (i.e., thousands of radio nodes), getting your customer base migrated to the most spectral efficient technology is where the challenge starts (i.e., millions of devices).

In other words, to get the maximum benefit of a given 3GPP Release’s gains, an operator needs to migrate its customer base’s terminal equipment to that more efficient Release. This will take time and might be costly, particularly if accelerated. Irrespective, migrating a customer base from radio-access A (e.g., GSM) to radio-access B (e.g., LTE) will take time and adhere to the normal market dynamics of churn, retention, replacement factors, and gross-adds. The migration to a better radio-access technology can be stimulated by above-market-average acquisition & retention investments and higher-than-market-average terminal equipment subsidies. In the end, competitors’ market reactions to your market actions will influence the migration time scale very substantially (this is typically under-estimated, as competitive driving forces are ignored in most analyses of this problem).

The typical radio-access network modernization cycle has so far been around 5 years. Modernization is mainly driven by hardware obsolescence and the need for more capacity per unit area than older (first & second generation) equipment could provide. The most recent and ongoing modernization cycle combines the need for LTE introduction with 2G and possibly 3G modernization. In some instances, retiring relatively modern 3G equipment at the expense of getting the latest multi-mode, so-called Single-RAN equipment deployed has been assessed to be worth the financial cost of the write-off. This new cycle of infrastructure improvements will in relative terms far exceed past upgrades. Software Defined Radios (SDR) with multi-mode (i.e., 2G, 3G, LTE) capabilities are being deployed in one integrated hardware platform, instead of the older generations that were separated, with the associated floor-space penalty and operational complexity. In theory, only Software Maintenance & simple HW upgrades (i.e., CPU, memory, etc.) would be required to migrate from one radio-access technology to another. Have we seen the last HW modernization cycle? … I doubt it very much! (i.e., we still have Cloud and Virtualization concepts moving out to the radio node, blurring the need for an own core network).

Multi-mode SDRs should in principle provide a more graceful, software-dominated radio evolution to increasingly more efficient radio access, as cellular networks and customers migrate from HSPA to HSPA+ to LTE and to LTE-advanced. However, in order to enable those spectrally efficient, superior radio-access technologies, a Mobile Network Operator will have to follow through with high investments (or incur high incremental operational cost) in vastly improved backhaul solutions and new antenna capabilities beyond what past access technologies required.

Whilst the radio access network infrastructure has gotten a lot more efficient from a cash perspective, the peripheral supporting parts (i.e., antenna, backhaul, etc.) have gotten a lot more costly in absolute terms (even if the relative cost per Byte might be perfectly okay).

Thus most of the economics of spectral efficiency can and will be captured within the modernization cycles and new software releases without much ado. However, backhaul and antenna technology investments and increased operational cost are likely to burden cash at the peak of new equipment (including modernization) deployment. Margin pressure is therefore likely if the Opex of supporting the increased performance is not well managed.

To recapture the most important issues of Spectrum Efficiency Economics:

  • network infrastructure upgrades, from a hardware as well as a software perspective, are required => capital investments, though these typically result in better Operational cost.
  • optimal customer migration to better and more efficient radio-access technologies => market investments and terminal subsidies.

Boosting spectrum much beyond 6 times today’s mobile-data-dedicated spectrum position is unlikely to happen within a foreseeable time frame. It is also unlikely to happen in bands that would be very interesting for providing both excellent depth of coverage and at the same time depth of capacity (i.e., lower frequency bands with lots of bandwidth available). Spectral efficiency will improve with both next generation HSPA+ as well as with LTE and its evolutionary path. However, depending on how we count the relative improvement, it is not going to be sufficient to substantially boost capacity and performance to the level a “1,000 times challenge” would require.

This brings us to the topic of vastly increased cell site density and of course Small Cell Economics.

THE ECONOMICS OF INCREASED CELL SITE DENSITY

It is fairly clear that there will not be a lot of new spectrum available in the next 10+ years. The relative increase in cellular bandwidth will come from re-purposing & perfecting existing legacy spectrum (i.e., by re-farming) and acquiring some new bandwidth in the low frequency range (<800MHz), which per definition is not going to provide a lot of bandwidth. The very high-frequency range (>3GHz) does contain a lot of bandwidth, but is only interesting for Small Cell and Femto-cell like deployments (a feeding frenzy for Small Cells).

Financially, Mobile Operators in mature markets, such as Western Europe, will be lucky to keep their earnings and margins stable over the next 8 – 10 years. Mobile revenues are likely to stagnate and possibly even decline. Opex pressure will continue to increase (e.g., simply from inflationary pressures alone). MNOs are unlikely to increase cell site density if it leads to incremental cost & cash pressure that cannot be recovered by proportional Topline increases. Therefore it should be clear that adding many more cell sites (be it Macro, Pico, Nano or Femto) to meet increasing (often un-managed & unprofitable) cellular demand is economically unwise and unlikely to happen unless followed by Topline benefits.

Increasing cell density dramatically (i.e., 56 times is dramatic!) to meet cellular data demand will only happen if it can be done with little incremental cost & cash pressure.

I have no doubt that distributing mobile data traffic over more and smaller nodes (i.e., decreasing traffic per node) and utilizing open-access technologies to manage data traffic loads are likely to mitigate some of the cash and margin pressure from supporting the higher performance radio-access technologies.

So let me emphasize that there will always be situations and geographically localized areas where cell site density will be increased, disregarding the economics, in order to meet urgent capacity needs or to provide specialized coverage. If an operator has substantially less spectral overhead (e.g., AT&T) than a competitor (e.g., T-Mobile US), the spectrum-starved operator might decide to densify with Small Cells and/or Distributed Antenna Systems (DAS) to be able to continue providing a competitive level of service (e.g., AT&T’s situation in many of its top markets). Such a spectrum-starved operator might even have to rely on massive WiFi deployments to continue to provide a decent level of customer service in extreme hot traffic zones (e.g., Times Square in NYC) and remain competitive, as well as to have a credible future growth story to tell shareholders.

Spectrum-starved mobile operators will move faster and more aggressively to Small Cell Network solutions including advanced (and not-so-advanced) WiFi solutions. This fast learning-curve might in the longer term make up for a poorer spectrum position.

In the following I will consider Small Cells in the widest sense, including solutions based both on controlled frequency spectrum (e.g., HSPA+, LTE bands) as well in the ISM frequency bands (i.e., 2.4GHz and 5.8GHz). The differences between the various Small Cell options will in general translate into more or less cells due to radio-access link-budget differences.

As I have been involved in many projects over the last couple of years looking at WiFi & Small Cell substitution for macro-cellular coverage, I would like to make clear that in my opinion:

A Small Cell Network is not a good technical (or economically viable) solution for substituting macro-cellular coverage for a mobile network operator.

However, Small Cells are Great for

  • Specialized coverage solutions difficult to reach & capture with standard macro-cellular means.
  • Localized capacity addition in hot traffic zones.
  • Coverage & capacity underlay when macro-cellular cell split options have been exhausted.

The last point in particular becomes important when mobile traffic exceeds the means for macro-cellular expansion, i.e., typically at urban & dense-urban macro-cellular ranges below 200 meters, and in some instances maybe below 500 meters, depending on the radio-access choice of the Small Cell solution.

Interference concerns will limit the transmit power and coverage range. However, as our focus is on small, localized and tailor-made coverage-capacity solutions, not on substituting macro-cellular coverage, range limitation is of lesser concern.

For great accounts of Small Cell network designs please check out Iris Barcia (@IBTwi) & Simon Chapman (@simonchapman) both from Keima Wireless. I recommend the very insightful presentation from Iris “Radio Challenges and Opportunities for Large Scale Small Cell Deployments” which you can find at “3G & 4G Wireless Blog” by Zahid Ghadialy (@zahidtg, a solid telecom knowledge source for our Industry).

When considering small cell deployment it makes good sense to understand the traffic behavior of your customer base. The Figure below illustrates a typical daily data and voice traffic profile across a (mature) cellular network:

  • up-to 80% of cellular data traffic happens either at home or at work.

Currently there is an important trend indicating that the evening cellular-data peak is disappearing, coinciding with WiFi peak usage taking over the previous cellular peak hour.

A great source of WiFi behavioral data, as it relates to Smartphone usage, can be found in Thomas Wehmeier’s (Principal Analyst, Informa: @Twehmeier) two pivotal white papers, “Understanding Today’s Smartphone User” Part I and Part II.

The above daily cellular-traffic profile combined with the below Figure on cellular-data usage per customer distributed across network cells

shows us something important when it comes to small cells:

  • Most cellular data traffic (per user) is limited to very few cells.
  • 80% (50%) of the cellular data traffic (per user) is limited to 3 (1) main cells.
  • The higher the cellular data usage (per user) the fewer cells are being used.

It is not only important to understand how data traffic (on a per-user basis) behaves across the cellular network. It is likewise very important to understand how the cellular data traffic multiplexes or aggregates across the cells in the mobile network.

We find in most Western European Mature 3G networks the following trend:

  • 20% of the 3G Cells carry 60+% of the 3G data traffic.
  • 50% of the 3G Cells carry 95% or more of the 3G data traffic.

Thus relatively few cells carry the bulk of the cellular data traffic. Not surprising really, as this trend was even more skewed for GSM voice.
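If you want to check this kind of concentration statistic against your own network, a minimal sketch (assuming you can export a per-cell traffic volume list; the sample numbers below are made up purely for illustration) could look like this:

```python
# Minimal sketch: cumulative traffic share carried by the busiest X% of cells.
# `traffic_per_cell` is a hypothetical list of per-cell busy-hour (or daily) data volumes.

def traffic_share_of_busiest_cells(traffic_per_cell, cell_fraction):
    """Fraction of total traffic carried by the busiest `cell_fraction` of cells."""
    ranked = sorted(traffic_per_cell, reverse=True)
    top_n = max(1, int(round(cell_fraction * len(ranked))))
    return sum(ranked[:top_n]) / sum(ranked)

# Illustrative, skewed example (not real data): 100 cells with very unequal load.
cells = [100] * 20 + [20] * 30 + [2] * 50
print(traffic_share_of_busiest_cells(cells, 0.20))  # share carried by the busiest 20% of cells
print(traffic_share_of_busiest_cells(cells, 0.50))  # share carried by the busiest 50% of cells
```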

The above trends are all good news for Small Cell deployment. They provide confidence that small cells can be an effective means of taking traffic away from macro-cellular areas where there is no longer an option for conventional capacity expansion (i.e., sectorization, additional carriers or conventional cell splits).

For the Mobile Network Operator, Small Cell Economics is a Total Cost of Ownership exercise comparing Small Cell Network Deployment  to other means of adding capacity to the existing mobile network.

The Small Cell Network needs (at least) to be compared to the following alternatives;

  1. Greenfield Macro-cellular solutions (assuming this is feasible).
  2. Overlay (co-locate) on existing network grid.
  3. Sectorization of an existing site solution (i.e., moving from 3 sectors to 3 + n on same site).

Obviously, in the “extreme” cellular-demand limit where none of the above conventional means of providing additional cellular capacity are feasible, Small Cell deployment is the only alternative (besides doing nothing and letting the customer suffer). Irrespective, we still need to understand how the economics will work out, as there might be instances where the most reasonable strategy is to let your customers “suffer” best-effort services. This would in particular be the case if there is no real competitive and incremental Topline incentive in adding more capacity.

However,

Competitive circumstances could force some spectrum-starved operators to deploy small cells irrespective of it being financially unfavorable to do so.

Let’s begin with the cost structure of a macro-cellular 3G Greenfield Rooftop Site Solution. We take the relevant cost structure of a configuration that we would be most likely to encounter in a Hot Traffic Zone / Metropolitan high-population-density area, which is also likely to be a candidate area for Small Cell deployment. The Figure below shows the Total Cost of Ownership, broken down into Annualized Capex and Annual Opex, for a Metropolitan 3G macro-cellular rooftop solution:

Note 1: The annualized Capex has been estimated assuming 5 years for RAN Infra, Backhaul & Core, and 10 years for Build. It is further assumed that the site is supported by leased-fiber backhaul. Opex is the annual operational expense for maintaining the site solution.

Note 2: Operations Opex category covers Maintenance, Field-Services, Staff cost for Ops, Planning & optimization. The RAN infra Capex category covers: electronics, aggregation, antenna, cabling, installation & commissioning, etc..

Note 3: The above illustrated cost structure reflects what one should expect from a typical European operation. North American or APAC operators will have different cost distributions. Though it is not expected to change conclusions substantially (just redo the math).

When we discuss Small Cell deployment, particularly as it relates to WiFi-based small cells, with Infrastructure Suppliers as well as Chip Manufacturers, you will get the impression that Small Cell deployment is Almost Free of Capex and Opex; i.e., hardly any build cost, free backhaul and extremely cheap infrastructure supported by no site rental, little maintenance and ultra-low energy consumption.

Obviously, if Small Cells cost almost nothing, increasing cell site density by 56 times or more becomes very interesting economics … Unfortunately such ideas are wishful thinking.

For Small Cells not to substantially pressure margins and cash, Small Cell Cost Scaling needs to be very aggressive. If we talk about a 56x increase in cell site density, the incremental total cost of ownership per cell should be at least 56 times lower than that of a macro-cellular expansion. Though let’s not fool ourselves!

No mobile operator would densify their macro cellular network 56 times if absolute cost would proportionally increase!

No Mobile operator would upsize their cellular network in any way unless it is at least margin, cost & cash neutral.

(I have no doubt that out there some are making relative business cases for small cells, comparing an equivalent macro-cellular expansion versus deploying Small Cells, and coming up with great cases … This would be silly of course, not that this has ever prevented such cases from being made and presented to Boards and CxOs).

The most problematic cost areas from a scaling perspective (relative to a macro-cellular Greenfield Site) are (a) Site Rental (lamp posts, shopping malls, etc.), (b) Backhaul Cost (if relying on Cable, xDSL or Fiber connectivity), (c) Operational Cost (complexity in numbers, safety & security) and (d) Site Build Cost (legal requirements, safety & security, …).

In most realistic cases (that I have seen) we find a 1:12 to 1:20 Total Cost of Ownership difference between a Small Cell’s unit cost and that of a Macro-Cellular Rooftop. While unit Capex can be reduced very substantially, the Operational Expense scaling is a lot harder to get down to the level required for very extensive Small Cell deployments.

EXAMPLE:

For a typical metropolitan rooftop (in Western Europe) we have an annualized capital expense (Capex) of ca. 15,000 Euro and operational expenses (Opex) in the order of 30,000 Euro per annum. The site-related Opex distribution would look something like this:

  • Macro-cellular Rooftop 3G Site Unit Annual Opex:
  • Site lease would be ca. 10,500EUR.
  • Backhaul would be ca. 9,000EUR.
  • Energy would be ca. 3,000EUR.
  • Operations would be ca. 7,500EUR.
  • i.e., total unit Opex of 30,000EUR (for average major metropolitan area)

Assuming that all cost categories could be scaled back by a factor of 56 (note: a very big assumption that all cost elements can be scaled back by the same factor!):

  • Target Unit Annual Opex cost for a Small Cell:
  • Site lease should be less than 200EUR (lamp post leases substantially higher)
  • Backhaul should be  less than 150EUR (doable though not for carrier grade QoS).
  • Energy should be less than 50EUR (very challenging for todays electronics)
  • Operations should be less than 150EUR (ca. 1 hour FTE per year … challenging).
  • Annual unit Opex should be less than 550EUR (not very likely to be realizable).

Similarly, the Small Cell unit Capital expense (Capex) would need to come in at less than 270EUR to scale fully with a macro-cellular rooftop (i.e., based on 56 times scaling).

  • Target Unit Annualized Capex cost for a Small Cell:
  • RAN Infra should be less than 100EUR (Simple WiFi maybe doable, Cellular challenging)
  • Backhaul would be less than 50EUR (simple router/switch/microwave maybe doable).
  • Build would be less than 100EUR (very challenging even to cover labor).
  • Core would be less than 20EUR (doable at scale).
  • Annualized Capex should be less than 270EUR (very challenging to meet this target)
  • Note: annualization factor: 5 years for all including Build.

So we have a Total Cost of Ownership TARGET for a Small Cell of ca. 800EUR

Inspecting the various capital as well as operational expense categories illustrates the huge challenge in being TCO-comparable to a macro-cellular urban/dense-urban 3G site configuration.
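The scaling exercise behind these targets is trivial to reproduce; the sketch below simply divides the illustrative Western European rooftop cost structure from the example above by the 56x density factor (and inherits the big assumption that every cost category scales by the same factor).

```python
# Sketch of the 56x TCO scaling exercise using the illustrative rooftop figures above.
# Big assumption: every cost category scales back by the same density factor.

DENSITY_FACTOR = 56

macro_opex = {"site_lease": 10_500, "backhaul": 9_000, "energy": 3_000, "operations": 7_500}
macro_annualized_capex = 15_000  # EUR per annum for a metropolitan 3G rooftop

small_cell_opex_target = {k: v / DENSITY_FACTOR for k, v in macro_opex.items()}
small_cell_capex_target = macro_annualized_capex / DENSITY_FACTOR

for category, target in small_cell_opex_target.items():
    print(f"{category}: target < {target:.0f} EUR per annum")
print(f"annualized capex: target < {small_cell_capex_target:.0f} EUR per annum")
total_target = (sum(macro_opex.values()) + macro_annualized_capex) / DENSITY_FACTOR
print(f"total TCO target: ~{total_target:.0f} EUR per annum")   # -> ~800 EUR
```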

Massive Small Cell Deployment needs to be almost without incremental cost to the Mobile Network Operator to be a reasonable scenario for the 1,000 times challenge.

Most of the analyses I have seen, as well as carried out myself, on real cost structures and aggressive pricing & solution designs show that if the Small Cell Network can be kept between 12 and 20 Cells (or Nodes), the TCO compares favorably to (i.e., beats) an equivalent macro-cellular solution. If the Mobile Operator is also a Fixed Broadband Operator (or has a favorable partnership with one), better cost scaling is in general possible than the above would assume (e.g., another AT&T advantage in their DAS / Small Cell strategy).

In realistic costing scenarios so far, the Small Cell economic boundaries are given by the Figure below:

Let me emphasize that the above obviously assumes that an operator has a choice between deploying a Small Cell Network and a conventional Cell Split, Nodal Overlay (or co-location on an existing cellular site) or Sectorization (if spectral capacity allows). In the Future, and in Hot Traffic Zones, this might not be the case, leaving Small Cell Network deployment, or letting the customers “suffer” poorer QoS, as the only options left to the mobile network operator.

So how can we (i.e., the Mobile Operator) improve the Economics of Small Cell deployment?

Having access to fixed broadband such as fiber or high-quality cable infrastructure would make the backhaul scaling a lot better. Being both a mobile and a fixed broadband provider does become very advantageous for Small Cell Network Economics. However, the site lease (and maintenance) scaling remains a problem, as lampposts or other interesting Small Cell locations might not scale very aggressively (e.g., there are examples of lamppost leases being as expensive as regular rooftop locations). From a capital investment point of view, I have my doubts whether the price will scale downwards as favorably as it would need to. Much of the capacity gain comes from very sophisticated antenna configurations that are difficult to see becoming extremely cheap:

Small Cell Equipment Suppliers would need to provide a Carrier-grade solution priced at a maximum of 1,000EUR, all included, to have a fighting chance of making massive small cell network deployment really economical.

We could assume that most of the “Small Cells” are in fact customers’ existing private access points (or our customers’ employers’ access points) and simply push (almost) all cellular data traffic onto those whenever a customer is in the vicinity of one. All those existing and future private access points are (at least in Western Europe) connected to at least fairly good quality fixed backhaul in the form of VDSL, Cable (DOCSIS3), and eventually Fiber. This would obviously improve the TCO of “Small Cells” tremendously … Right?

Well, it would reduce the MNO’s TCO (as it shifts the cost burden to the operator’s customers or the employers of those customers) … though this picture would not really be Small Cells in the sense of properly designed and integrated cells, in the Cellular sense of the word, providing the operator end-2-end control of its customers’ service experience. In fact, taking the above scenario to the extreme, we might not need Small Cells at all, in the Cellular sense, or at least dramatically fewer than the standard cellular capacity formula above would suggest.

In Qualcomm’s (as well as many infrastructure suppliers’) ultimate vision, the 1,000x challenge is solved by moving towards a super-heterogeneous network that consists of everything from Cellular Small Cells to Public & Private WiFi access points, with Femto cells thrown into the equation as well.

Such an ultimate picture might indeed make the Small Cell challenge economically feasible. However, it very fundamentally changes the current operational MNO business model, and it is not clear that this transition comes without cost and brings only benefits.

Last but not least, it is pretty clear that instead of 3 – 5 MNOs all going out plastering walls and lampposts with Small Cell Nodes & Antennas, sharing might be an incredibly clever idea. In fact I would not be altogether surprised if we see new independent business models providing Shared Small Cell solutions for incumbent Mobile Network Operators.

Before closing the Blog, I do find it instructive to pause and reflect on the lessons from Japan’s massive WiFi deployment. It might serve as a lesson for massive Small Cell Network deployment as well, and an indication that collaboration might be a lot smarter than competition when it comes to such deployments:

The Thousand Times Challenge: PART 2 … How to provide cellular data capacity?

CELLULAR DATA CAPACITY … A THOUSAND TIMES CHALLENGE?

It should be obvious that I am somewhat skeptical about all the excitement around cellular data growth rates and whether it’s 1,000x or 250x or 42x (see my blog on “The Thousand Times Challenge … The answer to everything about mobile data?”). In this I very much share Dean Bubley’s (Disruptive Wireless) critical view on the “cellular growth rate craze”. See Dean’s account in his recent Blog “Mobile data traffic growth – a thought experiment and forecast”.

This obsession with cellular data growth rates is Largely Irrelevant, or only serves Hysteria and Cool Blogs, Twitter and Press Headlines (which if nothing else is occasionally entertaining).

What IS Important! is how to provide more (economical) cellular capacity, avoiding:

  • Massive Congestion and loss of customer service.
  • Economic devastation as an operator tries to supply network resources for an un-managed cellular growth profile.

(Source: adapted from K.K. Larsen “Spectrum Limitations Migrating to LTE … a Growth Market Dilemma?“)

To me the discussion of how to Increase Network Capacity with a factor THOUSAND is an altogether more interesting discussion than what the cellular growth rate might or might not be in 2020 (or any other arbitrary chosen year).

Mallinson’s article “The 2020 Vision for LTE” in FierceWirelessEurope gives a good summary of this effort. Though my favorite account on how to increase network capacity, focusing on small cell deployment, is from Iris Barcia (@ibtwi) & Simon Chapman (@simonchapman) from Keima Wireless.

So how can we simply describe cellular network capacity?

Well … it turns out that Cellular Network Capacity can be described by 3 major components: (1) available bandwidth B, (2) (effective) spectral efficiency E and (3) the number of cells deployed N.

The SUPPLIED NETWORK CAPACITY in Mbps (i.e., C) is equal to the AMOUNT OF SPECTRUM, i.e., available bandwidth, in MHz (i.e., B) multiplied by the SPECTRAL EFFICIENCY PER CELL in Mbps/MHz (i.e., E) multiplied by the NUMBER OF CELLS (i.e., N).

It should be understood that the best approach is to apply the formula on a per radio-access-technology basis, rather than across all access technologies. Also separate the analysis into Downlink capacity (i.e., from Base Station to Customer Device) and Uplink capacity (i.e., from Customer Device to Base Station). If you average across many access technologies, or you consider the total bandwidth B including spectrum for both Uplink and Downlink, the spectral efficiency needs to be averaged accordingly. Also bear in mind that there could be some inter-dependency between the (effective) spectral efficiency and the number of cells deployed, though this depends on what approach you choose to take to Spectral Efficiency.
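If you nevertheless need to collapse several access technologies (or DL and UL) into a single number, the consistent way is a bandwidth-weighted average of the per-technology effective spectral efficiencies; a small sketch with purely illustrative figures:

```python
# Sketch: bandwidth-weighted average spectral efficiency across access technologies.
# Per-technology DL bandwidths (MHz) and assumed effective efficiencies (Mbps/MHz/cell);
# the figures are illustrative placeholders, not measurements from any network.

rats = {
    "HSPA+": {"dl_bandwidth_mhz": 15, "efficiency": 1.0},
    "LTE":   {"dl_bandwidth_mhz": 30, "efficiency": 1.6},
}

total_bw = sum(r["dl_bandwidth_mhz"] for r in rats.values())
weighted_eff = sum(r["dl_bandwidth_mhz"] * r["efficiency"] for r in rats.values()) / total_bw
print(f"Total DL bandwidth: {total_bw} MHz, weighted efficiency: {weighted_eff:.2f} Mbps/MHz/cell")
```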

It should be remembered that not all supplied capacity is being equally utilized. Most operators have 95% of their cellular traffic confined to 50% or less of their Cells. So the supplied capacity in half (or more) of most cellular operators’ networks remains substantially under-utilized (i.e., 50% or more of the radio network carries 5% or less of the cellular traffic … if you thought that Network Sharing would make sense … yeah it does … but it’s a different story;-).

Therefore I prefer to apply the cellular capacity formula to geographical limited areas of the mobile network, rather than network wide. This allows for more meaningful analysis and should avoid silly averaging effects.

So we see that providing network capacity is “pretty easy”: The more bandwidth or available spectrum we have, the more cellular capacity can be provided. The better and more efficient the air-interface technology, the more cellular capacity and quality we can provide to our customers. Last (but not least), the more cells we have built into our mobile network, the more capacity can be provided (though economics does limit this one).

The Cellular Network Capacity formula allows us to break down the important factors to solve the “1,000x Challenge”, which we should remember is based on a year 2010 reference (i.e., feels a little bit like cheating! right?;-) …

The Cellular Capacity Gain formula:

Basically, the Cellular Network Capacity Gain in 2020 (over 2010), or the Capacity we can supply in 2020, is related to how much spectrum we have available (compared to today or 2010), the effective spectral efficiency improvement relative to today (or 2010) and the number of cells deployed in 2020 relative to today (or 2010).

According to Mallinson’s article, the “1,000x Challenge” looks as follows (courtesy of SK Telekom):

According to Mallinson (and SK Telekom, see “Efficient Spectrum Resource Usage for Next Generation NW” by H. Park, presented at the 3GPP Workshop “on Rel.-12 and onwards”, Ljubljana, Slovenia, 11-12 June 2012), one should expect to have 3 times more spectrum available in 2020 (compared to 2010 for Cellular Data), 6 times more efficient access technology (compared to what was available in 2010) and 56 times higher cell density compared to 2010. Another important thing to remember when digesting the 3 x 6 x 56 is that this is an estimate from South Korea and SK Telekom, and to a large extent driven by South Korean conditions.
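Plugging these ratios into the Cellular Capacity Gain formula shows that the headline number is simply the product of the three relative improvements:

```python
# The Cellular Capacity Gain decomposition (2020 vs the 2010 reference):
#   Gain = (B_2020 / B_2010) x (E_2020 / E_2010) x (N_2020 / N_2010)
bandwidth_gain  = 3    # 3x more spectrum for cellular data (SK Telekom / Mallinson estimate)
efficiency_gain = 6    # 6x more efficient access technology than in 2010
density_gain    = 56   # 56x higher cell density than in 2010

print(bandwidth_gain * efficiency_gain * density_gain)  # -> 1008, i.e., roughly "1,000x"
```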

Above I have emphasized the 2010 reference. It is important to remember this reference to better appreciate where the high ratios in the above come from. For example, in 2010 most mobile operators were using 1 to maximum 2 carriers, or were in the process of upgrading to 2 carriers to credibly support HSPA+. Further, many operators had not transitioned to HSPA+, and a few had not even added HSUPA to their access layer. Furthermore, most Western European operators had on average 2 carriers for UMTS (i.e., 2×10 MHz @ 2100MHz). Some operators with a little excess 900MHz may have deployed a single carrier and either postponed 2100MHz or only very lightly deployed the higher frequency UMTS carrier in their top cities. In 2010, the 3G population coverage (defined as having minimum HSDPA) was in Western Europe at maximum 80%, and in Central Eastern & Southern Europe in most places maximum 60%. 3G geographical coverage on average across the European Union was in 2010 less than 60% (in Western Europe up to 80% and in CEE up to 50%).

OPERATOR EXAMPLE:

Take an European Operator with 4,000 site locations in 2010.

In 2010 this operator had deployed 3 carriers supporting HSPA @ 2100MHz (i.e., a total bandwidth of 2x15MHz).

Further in 2010 the Operator also had:

  • 2×10 MHz GSM @ 900MHz (with possible migration path to UMTS900).
  • 2×30 MHz GSM @ 1800MHz (with possible migration path to LTE1800).

By 2020 it retained all its spectrum and gained

  • 2×10 MHz @ 800MHz for LTE.
  • 2×20 MHz @ 2.6GHz for LTE.

For simplicity (and for idealistic reasons) let’s assume that by 2020 2G has finally been retired. Moreover, let’s concern ourselves with cellular data at 3G and above service levels (i.e., ignoring GPRS & EDGE). Thus I do not distinguish between whether the air-interface is HSPA+ or LTE/LTE-advanced.

OPERATOR EXAMPLE: BANDWIDTH GAIN 2010 – 2020:

The Bandwidth Gain part of the “Cellular Capacity Gain” formula is in general specific to individual operators and the particular future regulatory environment (i.e., in terms of new spectrum being released for cellular use). One should not expect a universally applicable ratio here. It will vary with a given operator’s spectrum position … Past, Present & Future.

In 2010 our Operator had 15MHz (for either DL or UL) supporting cellular data.

In 2020 the Operator should have 85MHz (for either DL or UL), which is almost a factor of 6 more than in 2010. Don’t be concerned about this not being 3! After all, why should it be? Every country and operator will face different constraints and opportunities, and therefore there is no reason why 3 x 6 x 56 should be a universal truth!
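The tally behind the factor 6 is simply the operator example’s single-sided (DL or UL) bandwidth, summed across its bands:

```python
# Single-sided (DL or UL) bandwidth for cellular data in the operator example (MHz).
data_bw_2010 = 15                      # 3 UMTS carriers of 2x5 MHz @ 2100MHz
data_bw_2020 = 15 + 10 + 30 + 10 + 20  # 2100 + 900 + 1800 + 800 + 2600 MHz bands

print(data_bw_2020, round(data_bw_2020 / data_bw_2010, 1))  # -> 85 MHz, ~5.7x (almost 6)
```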

If Regulators and Lawmakers were more friendly towards spectrum sharing, the boost of available spectrum for cellular data could be a lot bigger.

SPECTRAL EFFICIENCY GAIN 2010 – 2020:

The Spectral Efficiency Gain part of the “Cellular Capacity Gain” formula is more universally applicable to cellular operators at the same technology stage and with a similar customer mix. Thus, in general, for an apples-to-apples comparison, more or less the same gains should be expected.

In my experience, Spectral Efficiency almost always gets experts’ emotions running high. More often than not there is a divide between experts (across Operators, Suppliers, etc.) as to what would be an appropriate spectral efficiency to use in capacity assessments. Clearly everybody understands that the theoretical peak spectral efficiency does not reflect the real service experience of customers or the amount of capacity an operator has in its Mobile Network. Thus, in general an effective (or average) spectral efficiency is applied, often based on real network measurements or estimates derived from such.

When LTE was initially specified, its performance targets were referenced to HSxPA Release 6. The LTE aim was to get 3 – 4 times the DL spectral efficiency and 2 – 3 times the UL spectral efficiency. LTE-advanced targets doubling the peak spectral efficiency for both DL and UL.

At maximum expect the spectral efficiency to be:

  • @Downlink to be 6 – 8 times that of Release 6.
  • @Uplink to be 4 – 6 times that of Release 6.

Note that this comparison assumes an operator’s LTE deployment would move from 4×4 MiMo to 8×8 MiMo in Downlink and from 64QAM SiSo to 4×4 MiMo in Uplink. Thus a quantum leap in antenna technology and substantial antenna upgrades over the period from LTE to LTE-advanced would be on the to-do list of the mobile operators.

In theory for LTE-advanced (and depending on the 2010 starting point) one could expect a factor 6 boost in spectral efficiency  by 2020 compared to 2010, as put down in the “1,000x challenge”.

However, it is highly unlikely that all devices by 2020 will be LTE-advanced. Most markets will still have at least 40% 3G penetration, and some laggard markets will still have a very substantial 2G base. While LTE will be growing rapidly, the share of LTE-advanced terminals might be fairly low even in 2020.

Using a 6x spectral efficiency factor by 2020 is likely extremely optimistic.

A more realistic assessment would be a factor 3 – 4 by 2020 considering the blend of technologies in play at that time.

INTERLUDE

The critical observer sees that we have reached a capacity gain (compared to 2010) of 6 x (3 – 4), or 18 to 24 times. Thus to reach 1,000x we still need between roughly 40 and 56 times the cell density.

and that translates into a lot of additional cells!
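The residual density requirement follows directly from the gain formula once the spectrum and efficiency factors are fixed:

```python
# Residual cell density gain once bandwidth and spectral efficiency gains are fixed.
target_gain = 1000
bandwidth_gain = 6                       # the example operator's spectrum boost vs 2010
for efficiency_gain in (3, 4):           # the more realistic blended efficiency gain by 2020
    density_needed = target_gain / (bandwidth_gain * efficiency_gain)
    print(f"efficiency x{efficiency_gain}: ~{density_needed:.0f}x more cells needed")
# -> roughly 56x and 42x respectively
```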

CELL DENSITY GAIN 2010 – 2020:

The Cell Density Gain part of the “Cellular Capacity Gain” formula is in general specific to individual operators and the cellular traffic demand they might experience, i.e., there is no unique universal number to be expected here.

So to get to 1,000x the capacity of 2010 we need either magic or a 50+x increase in cell density (which some may argue would amount to magic as well) …

Obviously … this sounds like a real challenge … getting more spectrum and higher spectral efficiency is a piece of cake compared to 50+ times more cell density. Clearly our Mobile Operator would go broke if it were required to finance 50 x 4,000 = 200,000 sites (or cells; i.e., 3 cells = 1 macro site). The Opex and Capex requirements would simply NOT BE PERMISSIBLE.

50+ times site density on a macro scale is Economical & Practical Nonsense … The Cellular Network Capacity heuristics in such a limit works ONLY for localized areas of a Mobile Network!

The good news is that such macro-level densification would also not be required … this is where Small Cells enter the Scene. This is where you run to experts such as Simon Chapman (@simonchapman) from Keima Wireless, or similar companies specialized in providing intelligent small cell deployment. It is clear that this is better done early on in the network design rather than when the capacity pressure becomes a real problem.

Note that I am currently assuming that Economics and Deployment Complexity will not become challenging with Small Cell deployment strategy … this (as we shall see) is not necessarily a reasonable assumption in all deployment scenarios.

Traffic is not equally distributed across a mobile network, as the chart below clearly shows (see also Kim K Larsen’s “Capacity Planning in Mobile Data Networks Experiencing Exponential Growth in Demand”):

20% of the 3G-cells carry 60% of the data traffic and 50% of the 3G-cells carry as much as 95% of the 3G traffic.

Good news is that I might not need to worry too much about half of my cellular network that only carries 5% of my traffic.

The bad news is that up to 50% of my cells might actually give me a substantial headache if I don’t have sufficient spectral capacity and enough customers on the most efficient access technology, leaving me little choice but to increase my cellular network density, i.e., build more cells into my existing cellular grid.

Further, most of the data traffic is carried within the densest macro-cellular network grid (at least if an operator starts exhausting its spectral capacity with a traditional coverage grid). In a typical European City ca. 20% of Macro Cells will have a range of 300 meter or less and 50% of the Macro Cells will have a range of 500 meter or less (see below chart on “Cell ranges in a typical European City”).

Finding suitable and permissible candidates for macro-cellular cell splits below 300 meters is rather unlikely. Between 300 and 500 meters there might still be macro-cellular split optionality, and if so it would make the most sense to commence there (pending future anticipated traffic growth). Above 500 meters it is usually fairly likely to find suitable macro-cellular site candidates (i.e., in most European Cities).

Clearly if the cellular data traffic increase would require a densification ratio of 50+ times current macro-cellular density a macro cellular alternative might be out of the question even for cell ranges up-to 2 km.

A new cellular network paradigm is required as the classical cellular network design breaks down!

Small Cell implementation is often the only alternative a Mobile Operator has to provide more capacity in a dense urban or high-traffic urban environment.

As Mobile Operators change their cellular design, in dense urban and urban environments, to respond to the increasing cellular data demand, what kind of economic boundaries would need to be imposed to make a factor 50x increase in cell density work out?

No Mobile Operator can afford to see its Opex and Capex pressure rise! (i.e., unless revenue follows or exceed which might not be that likely).

For a moment … remember that this site density challenge is not limited to a single mobile operator … imagine that all operators (i.e., typically 3 – 5, except for India with 13+;-) in a given market need to increase their cellular site density by a factor of 50. Even if there is (in theory) lots of space at street level for Small Cells … one could imagine the regulatory resistance (not to mention consumer resistance) if a city would see demand for Small Cell locations increase by a factor of 150 – 200.

Thus, Sharing Small Cell Locations and Supporting Infrastructure will become an important trend … which should also lead to Better Economics.

This bring us to The Economics of the “1,000x Challenge” … Stay tuned!

The Thousand Times Challenge: PART 1 … The answer to everything about mobile data?

This is not PART 2 of “Mobile Data Growth…The Perfect Storm” … This is the story of the Thousand Times Challenge!

It is not unthinkable that some mobile operators will face very substantial problems with their cellular data networks due to rapid, uncontrollable or un-managed cellular data growth. Once cellular data demand exceeds the installed base supply of network resources, the customer experience will likely suffer and cellular data consumers will no longer get the same service level that they had prior to the onset of over-demand.

One might of course argue that consumers were (and in some instances still are) spoiled during the period when mobile operators had plenty of spectral capacity available (relative to their active customer base) with unlimited data plans and very little cellular network load . As more and more customers migrate to smartphones and 3G data services, it follows naturally that there will be increasingly less spectral resources available per customer.

The above chart (from “Capacity Planning in Mobile Data Networks Experiencing Exponential Growth in Demand”) illustrates such a situation, where customers’ cellular data demand eventually exceeds the network capacity … which leads to a congested situation and less network resources per customer.

A mobile operator has several options to mitigate the emergence of a capacity and spectrum crunch:

  1. Keep expanding and densifying the cellular network.
  2. Free up legacy (i.e., “old-technology”) spectrum and re-deploy it for the technology facing demand pressure.
  3. Introduce policy and active demand management on a per-user / segment level.
  4. Allow customer service to degrade as a provider of best-effort cellular data.
  5. Stimulate and design for structural off-loading (leveraging fixed as well as cellular networks).
  6. etc..

DEMAND … A THOUSAND TIMES FABLE?

Let me start by saying that cellular data growth does pose a formidable challenge for many mobile operators … already today … it is easy to show that even at modest growth rates, cellular data demand gets pretty close to, or beyond, the cellular network resources available today and in the future, unless we fundamentally change the way we design, plan and build networks.

However, Today The Challenge is Not network wide … At present, it is limited to particular areas of the cellular networks … though as the cellular data traffic grows, the demand challenge does spread outwards and affects an ever higher share of the cellular network.

Lately 1,000 has become a very important number. It has become the answer to the Smartphone Challenge and the exponential growth of mobile data. 1,000 seems to represent both demand as well as supply. Qualcomm has made it their “mission in life” (at least for the next 8 years) to solve the magic 1,000 challenge. Mallinson’s article “The 2020 Vision for LTE” in FierceWirelessEurope gives a slightly more balanced view on demand and target supply of cellular resources: “Virtually all commentators expect a 15 to 30-fold traffic increase over five years and several expect this growth trend to last a decade to 2020, representing a 250-1,000-fold increase.” (note: the cynic in me wonders about the several; it’s more than 2, but is it much more than 3?)

The observant reader will see that the range between minimum and maximum is a factor of 4 … a reasonably large margin of error to plan for. If by 2020 the demand would be 1,000 times that of the demand in 2010, our Technologies had better be a lot better than that, as that would be an average with a long tail.

Of course most of us know that the answer really is 42! NOT 1000!

Joke aside … let’s get serious about this 1,000 Fable!

Firstly, 1,000 is (according to Qualcomm) the expected growth of data between 2010 and 2020 … Thus if data was 42 in 2010 it would be 1,000×42 by 2020. That would be a CAGR of 100% over the period, or a doubling of demanded data year in, year out for 10 years.

… Well not really!

Qualcomm states that data demand in 2012 would be 10x that of 2010. Thus, it follows that data demand between 2012 and 2020 would “only” grow 100x, or a CAGR of 78% over that period.

So in 2021 (1 year after we had 1,000x) we would see demand of ca. 1,800x, in 2022 (2 years after we solved the 1000x challenge) we would experience a demand of more than 3,000x, and so forth …
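The compounding arithmetic behind these figures is straightforward:

```python
# Compounding arithmetic behind the "1,000x by 2020" narrative (2010 reference = 1).
cagr_2010_2020 = 1000 ** (1 / 10) - 1        # ~1.0, i.e., ~100% growth per year over 10 years
cagr_2012_2020 = (1000 / 10) ** (1 / 8) - 1  # 10x already reached by 2012 -> ~78% per year

demand_2021 = 1000 * (1 + cagr_2012_2020)        # ~1,780x the 2010 level
demand_2022 = 1000 * (1 + cagr_2012_2020) ** 2   # ~3,160x the 2010 level

print(f"{cagr_2010_2020:.0%} {cagr_2012_2020:.0%} {demand_2021:.0f}x {demand_2022:.0f}x")
```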

So it is great to solve the 1,000x challenge by 2020, but it’s going to be like “peeing in your trousers on a cold winter day”. Yes, it will be warm for a little while. Then it’s going to be really cold. In other words, it is not going to help much structurally.

Could it be that this 1,000x challenge might be somewhat flawed?

  1. If All Commentators and Several Experts are to be believed, the growth worldwide is almost perfectly exponential with an annual growth rate between 70% and 100%.
  2. Growth is “unstoppable” -> unlimited sources for growth.

Actually most projections (from several expert sources;-) that I have seen do show substantial deceleration as the main source of growth exhausts, i.e., as the Early & Late Majority of customers adopt mobile data. Even Cisco’s own “Global Mobile Data Traffic Forecast Update, 2011 – 2016” shows an average deceleration of growth of 20% per annum between 2010 and their 2014 projections (note: it’s sort of “funny” that Cisco then decide that after 2014 growth no longer slows down but stays put at 78% … alas, artistic freedom I suppose?).

CELLULAR CUSTOMER MIGRATION

The following provides a projection of 2G, 3G and LTE uptake between 2010 (Actual) and 2020 (Expected). The dynamics are based on the latest Pyramid Research cellular projections for WEU, US, APAC, LA & CEE between 2010 and 2017. The “Last Mile”, 2018 – 2020, is based on reasonable dynamic extrapolations of the prior period with a stronger imposed emphasis on LTE growth. Of course Pyramid Research provides one view of the technology migration and, given the uncertainty on market dynamics and pricing policies, it is simply one view on how the cellular telco world will develop. This said, I tend to find Pyramid Research getting reasonably close to actual developments, and the trends across the various markets are not that counter-intuitive.

For the US Market, LTE is expected to grow very fast and reach a penetration level beyond 60% by 2020. For the other markets, LTE is expected to evolve relatively sluggishly, with an uptake percentage of 20% +/- 5% by 2020. It should be remembered that all projections are averages. Thus within a market, for a specific country or operator, the technology shares could very well differ somewhat from the above.

The growth rates for LTE customer uptake over the periods 2010/2011 – 2020 and 2015 – 2020, and the respective LTE shares in 2020, are:

WEU 2010-2020: 87%, 2015 – 2020: 24%, share in 2020: 20%.

USA 2010-2020: 48%, 2015 – 2020: 19%, share in 2020: 62%.

APAC 2010-2020: 118%, 2015 – 2020: 61%, share in 2020: 30%.

CEE 2011-2020: 168%, 2015 – 2020: 37%, share in 2020: 20%.

LA 2010-2020: 144%, 2015 – 2020: 37%, share in 2020: 40%.

Yes, the LTE growth rates are very impressive when compared to the initial launch year with its very low initial uptake. As already pointed out in my Blog, growth rates referenced back to a penetration of less than 2% have little practical meaning. The average LTE uptake rate across all the above markets between 2012 and 2020 is 53% +/- 17% (highest being APAC and lowest being USA).

What should be evident from the above technology uptake charts is that

  • 3G remains strong even in 2020 (though likely dominated by prepaid at that time).
  • 2G will remain for a long time in both CEE & APAC, even towards 2020.

In the scenario where we have a factor 100 growth of usage between 2012 and 2020, which is a CAGR of 78%, the growth of usage per user would need to be 16% pa at an annual uptake rate of 53%. However, without knowing the starting point of the LTE data usage (which initially will be very low as there are almost no users), these growth rates are not of much use and certainly cannot be used to draw any conclusions about congestion or network dire straits.

Example based on European Growth Figures:

A cellular network has 5 million customers, 50% Postpaid.

The network has 4,000 cell sites (12,000 sectors) that by 2020 cover both UMTS & LTE to the same depth.

In 2020 the operator has allocated 2×20 MHz for 3G & 2×20 MHz for LTE. Remaining 2G customers are on a single shared GSM network supporting all GSM traffic in the country with no more than 2×5 MHz.

By 2020 the cellular operator has ca. 4 Mio 3G users and ca. 0.9 Mio LTE users (the remaining 100 thousand GSM customers are the real Laggards).

The 3G uptake growth rate ‘2010 – ‘2020 was 7%; between ’10 – ’12 it was 25%. 3G usage growth would not be very strong as it is a blend of Late Majority and Laggards (including a fairly large Prepaid segment that hardly appears to use cellular data).

The LTE uptake growth rate ‘2010 – ‘2020 was 87%; between ’10 – ’12 it was 458%. The first 20% of LTE would likely consist of Innovators and Early Adopters. Thus, usage growth of LTE should be expected to be more aggressive than for 3G.

Let’s assume that 20% of the cell sites carry 50% of the devices and, for simplicity, also 50% of the data traffic (see for example my Slideshare presentation “Capacity Planning in Mobile Data Networks Experiencing Exponential Growth in Demand” which provides evidence for such a distribution).

So we have ca. 800 3G users per sector (or ca. 40 3G users per sector per MHz). By 2020, one would likewise anticipate for LTE ca. 200 LTE users per sector (or ca. 10 LTE users per sector per MHz). Note that no assumption of activity rate has been imposed.

Irrespective of the growth rate, we need to ask ourselves whether 10 LTE users per sector per MHz would pose a congested situation (in the busy hour). Assume that the effective LTE spectral efficiency across a macro cellular cell would be 5 Mbps/MHz/sector. So the LTE users in a sector could on average share up to 100 Mbps (@ 20 MHz DL).

For 3G, where we would have 40 3G users per sector per MHz, similar (very simple) considerations allow us to conclude that the 3G users in a sector would have no more than 40 Mbps to share (under semi-ideal radio conditions and @ 20 MHz DL). This could be a lot more demanding and customer affecting than the resulting LTE demand, despite LTE having a substantially higher growth rate than we saw for 3G over the same period.
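
A minimal sketch of the sector-loading arithmetic above. The 20%-of-sites-carry-50%-of-traffic rule and the LTE spectral efficiency are taken from the example; the 3G spectral efficiency of 2 Mbps/MHz/sector is my inference from the 40 Mbps @ 20 MHz figure:

```python
# Sector-loading sketch for the example operator above. Inputs are the stated
# assumptions; the 3G efficiency is inferred from the 40 Mbps @ 20 MHz statement.

sites, sectors_per_site = 4000, 3
busy_site_share, traffic_share = 0.20, 0.50
busy_sectors = sites * busy_site_share * sectors_per_site     # 2,400 "busy" sectors

technologies = {
    #       users,  DL MHz, Mbps/MHz/sector
    "3G":  (4.0e6,  20,     2.0),   # efficiency inferred, not stated in the text
    "LTE": (0.9e6,  20,     5.0),
}

for name, (users, bw_mhz, eff) in technologies.items():
    users_per_sector = users * traffic_share / busy_sectors
    sector_mbps = eff * bw_mhz                                # shared DL capacity per sector
    print(f"{name}: ~{users_per_sector:.0f} users/sector "
          f"(~{users_per_sector / bw_mhz:.0f} per MHz) sharing ~{sector_mbps:.0f} Mbps "
          f"=> ~{sector_mbps / users_per_sector * 1000:.0f} kbps per user on average")
```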

High growth rates do not by default result in cellular network breakdown. It is the absolute traffic load (in the Busy Hour) that matters.

The growth of cellular data usage between 2010 and 2020 is likewise going to be awesome (it would be higher than the above technology uptake rates) … but also pretty meaningless.

Growth rates only matter in as much as growth brings an absolute demanded traffic level above the capability of the existing network and spectral resources (supplied traffic capacity).

Irrespective of whether a growth rate is high, medium or low … all can cause havoc in a cellular network … some networks will handle a 1,000x without much ado, others will tumble at 250x, whatever the reference point level (which also includes the network design and planning maturity levels).

However, what is important is how to provide more (economical) cellular capacity avoiding;

  • Massive Congestion and loss of customer service.
  • Economical devastation as the operator tries to supply network resources for an un-managed cellular growth profile.

(Source: adapted from K.K. Larsen “Spectrum Limitations Migrating to LTE … a Growth Market Dilemma?“)

Facebook Values … Has the little boy spoken?

Facebook has lost ca. 450+ Million US$ per day since its IPO … or about 40 Billion US$ … in a little under 90 days (i.e., reference date 17-08-2012).

This is like losing an economy such as the Seychelles every second day. Or a Bulgaria in less than 90 days. (Note: this is not to say that you could buy Bulgaria for $40B … well who knows? 😉 … the comparison just serves to make the loss of Facebook value more tangible. Further, one should not take the suggested relationship between the market value of a corporation such as Facebook and the GDP of a country too seriously, as also pointed out by Dean Bubley @disruptivedean).

That’s a lot of value lost in a very short time. I am sure Bulgarians, “Seychellians” and FB investors can agree to that.

40 Billion US Dollar? … It’s a little less than 20 Mars Missions … or

40 Billion US Dollar could keep 35 thousand Americans in work for 50 years each!

So has the little boy spoken? Is the Emperor of Social Media Naked?

Illustration: THORARINN LEIFSSON http://www.totil.com

Let’s have a more detailed look at Facebook’s share price development since May 18th 2012.

The Chart below shows the Facebook’s share price journey, the associated book value, the corresponding sustainable share of Online Ad Spend (with an assumed 5yr linear ramp-up from today’s share) and the projected share of Online Ad Spend in 2012.

In the wisdom of looking backwards … is Facebook, the Super-Mario of Social Media, really such a bad investment? Or is this just a bump in a long and prosperous road ahead?

I guess it all rises and falls with whatever belief an investor has in Facebook’s ability to capture sufficient Online Advertisement Spend. Online Ad Spend obviously includes the Holy Grail of Mobile Ad Revenues as well.

FB’s revenue share of Online Ad Spend has risen steadily from 1.3% in 2009 to ca. 5% in 2011 and is projected to be at least 6% in 2012.

Take a look at FB’s valuation (or book value) which at the time of the IPO (i.e., May 18th 2012) was ca. 80+ Billion US Dollars. Equivalent to a share price of $38.32 per share (at closing).

In terms of sustainable business, such a valuation could be justifiable if FB could capture and sustain at least 23% of the Online Ad Spend in the longer run. Compare this with ca. 5% in 2011, and with Google’s 40+% in 2011. AOL, which is in the Top 5 of companies at capturing Online Advertisement Spend, had a share of Online Ad Spend a factor of 15 less than Google’s. Furthermore, the Top 5 accounts for more than 70% of the Online Ad Spend in 2011. The remaining 30% of Online Ad Spend arises mainly from Asia-Pacific logographic, politically complicated, and Cyrillic-dominated countries, in which Latin-based Social Media & Search in general perform poorly (i.e., when it comes to capturing Online Ad Spend).

Don’t worry! Facebook is in the Top 5 list of companies getting a piece of the Online Advertisement pie.

It would appear likely that Facebook should be able to continue to increase its share of Online Ad Spend from today’s fairly low level. The above chart shows that FB’s current share price level (closing 17-August-2012) corresponds to a book value of ca. $40 Billion and a sustainable share of the Online Ad Spend of a bit more than 10%.

It would be sad if Facebook should not be able to ever get more than 10% of the Online Ad Spend.

From this perspective:

A Facebook share price below $20 does seem awfully cheap!

Is it time to invest in Facebook? … at the moment it looks like The New Black is bashing Social Media!

So the share price of Facebook might drop further … as current investors try to off-load their shares (at least the ones that did not buy at or immediately after the IPO).

Facebook has 900+ Million (and approaching a Billion) users. More than 500+ Million of those 900+ Million Facebook users are active daily and massively using their Smartphones to keep updated with Friends and Fiends. In 2011 there were more than 215 Billion FB events.

Facebook should be a power house for Earned and Owned Social Media Ads (sorry, this is really still Online Advertisement despite the Social Media tag) … we consumers are much more susceptible to friends’ endorsements, or those of our favorite brands for that matter, than to the mass-fabricated plain old online advertisement that most of us are blind to anyway (or get annoyed by, which from an awareness perspective is not necessarily un-intended).

All in all

Maybe the Little Boy will not speak up as the Emperor is far from naked!

METHODOLOGY

See my Social Media Valuation Blog “A walk on the Wild Side”.

The following has been assumed in the FB Valuation Assessment (a simple cash-flow sketch based on these assumptions follows the list):

  1. WACC 9.4%
  2. 2012 FB capture 6% of total online ad spend.
  3. FB gains a sustainable share of online ad spend X%.
  4. 5 yr linear ramp-up from 2012 6% to X%, and then maintained.
  5. Other revenues 15% in 2012, linearly reduced to 10% after 5 yrs and then maintained.
  6. Assume FB can maintain a free cash flow yield of 25%.
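
A minimal sketch of how these assumptions can be turned into a value estimate. This is not the exact model behind the chart; the online-ad-spend path below is a placeholder interpolation of the $94B (2012) and $132B (2015) figures quoted later in this post series, so the output will not reproduce the chart values exactly, but the mechanics — 5 yr linear ramp-up of ad-spend share, shrinking other-revenue share, 25% FCF yield, 9.4% WACC and a terminal value — are the ones listed above:

```python
# Hedged sketch of the FB valuation mechanics (assumptions as per the list above;
# the ad-spend path is a placeholder, not an actual eMarketer forecast).

def fb_value(target_share, wacc=0.094, fcf_yield=0.25, terminal_growth=0.0):
    ad_spend = [94, 106, 119, 132, 140, 148]            # 2012..2017 in $B (assumed path)
    share_2012, other_start, other_end = 0.06, 0.15, 0.10
    value = 0.0
    for t, spend in enumerate(ad_spend):
        ramp = min(t / 5.0, 1.0)                        # 5 yr linear ramp-up, then maintained
        share = share_2012 + (target_share - share_2012) * ramp
        other = other_start + (other_end - other_start) * ramp
        revenue = spend * share / (1.0 - other)         # ad revenue grossed up by "other revenues"
        fcf = revenue * fcf_yield
        value += fcf / (1.0 + wacc) ** (t + 1)
    # Gordon-growth terminal value on the last explicit year's free cash flow
    value += fcf * (1 + terminal_growth) / (wacc - terminal_growth) / (1 + wacc) ** len(ad_spend)
    return value

for x in (0.10, 0.15, 0.20):
    print(f"Sustainable ad-spend share of {x:.0%}: ca. ${fb_value(x):.0f}B")
```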

Mobile Data Growth … The Perfect Storm? (PART 1)

The Perfect Mobile Data Storm … the Smartphone Challenge and, by that, the Signalling Storm.

Mobile Operators hit by the Mobile Data Tsunami … tumbling over mobile networks … leading to

Spectrum Exhaustion

and

Cash Crunch

and

Financial disaster (as cost of providing mobile data exceeds the revenues earned from mobile data).

as Mobile Operators try to cope with hyper-inflationary growth of data usage.

Will LTE be ready in time?

Will LTE be sufficient to remedy the mobile data growth observed over the last couple of years?

The Mobile Industry would have been better off if Data Consumption had stayed “Fixed”? Right! …Right?

At this time my Twitter Colleague Dean Bubley (@Disruptivedean) will be near critical meltdown 😉 …

Dean Bubley (Disruptive Wireless) is deeply skeptical about the rhetoric around the mobile data explosion and tsunamis, as he has accounted for in a recent Blog “Mobile data traffic growth – a thought experiment and forecast”. Dean hints at possible ulterior motives behind the dark dark picture of the mobile data future painted by the Mobile Industry.

I do not share Dean’s opinion (re: ulterior motives in particular; most of his other thoughts on cellular data growth are pretty OK!). It almost suggests a Grand Mobile Industry Conspiracy in play … giving the Telco Industry a little too much credit … rather than the simple fact that we as an industry (in particular the Marketing side of things) tend to be governed by the short term. Being “slaves of anchoring bias” to the most recent information available to us (i.e., rarely more than the last 12 or so months).

Of course Technology Departments in the Mobile Industry use the hyper-growth of Cellular Data to get as much Capex as possible, ensuring sufficient capacity overhead can be bought and built into the Mobile Networks, mitigating the uncertainty and complexity of Cellular data growth.

Cellular Data is by its very nature a lot more difficult to forecast and plan for than the plain old voice service.

The Mobile Industry appears to suffer from Mobile Data Auctusphobia – The Fear of Growth (which is sort of “funny”, as for the first ca. 4 – 5 years of UMTS we were all looking for growth of data, and of course the associated data revenues, that would make our extremely expensive 3G spectrum a somewhat more reasonable investment … ).

The Mobile Industry got what it wished for with the emergence of the Smartphone (Thanks Steve!).

Why Data Auctusphobia?

Let’s assume that an operator experienced a Smartphone growth rate of 100+% over the last 12 months. In addition, the operator also observes the total mobile data volume demand growing by 250+% (i.e., not uncommon annual growth rates between 2010 and 2011). It is very tempting (and also likely to be very wrong!) to use the historical growth rate going forward without much consideration for the underlying growth dynamics of technology uptake, migration and usage-per-user dynamics. Clearly one would be rather naive NOT to be scared about the consequences of a sustained annual growth rate of 250%! (irrespective of such thinking being flawed).

The problem with this (naive) “forecasting” approach is that anchoring on the past is NOT likely to be a very good predictor of longer-term expectations.

THE GROWTH ESSENTIALS – THE TECHNOLOGY ADAPTATION.

To understand mobile data growth, we need to look at minimum two aspects of Growth:

  1. Growth of users (per segment) using mobile data (i.e., data uptake).
  2. Growth of data usage per user segment (i.e., segmentation is important as averages across a whole customer base can be misleading).

i.e., Growth can be decomposed into the uptake rate of users and the growth of these users’ data consumption, i.e., CAGR_Volume = (1 + CAGR_Users) x (1 + CAGR_Usage) – 1.
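
As a minimal illustration of the decomposition (re-using the earlier 100x example, where 78% p.a. volume growth at a 53% p.a. uptake rate implies ca. 16% p.a. usage growth per user):

```python
# CAGR decomposition: CAGR_Volume = (1 + CAGR_Users) x (1 + CAGR_Usage) - 1

def volume_cagr(users: float, usage: float) -> float:
    """Combined volume growth from user uptake growth and per-user usage growth."""
    return (1 + users) * (1 + usage) - 1

def required_usage_cagr(volume: float, users: float) -> float:
    """Per-user usage growth needed to reach a given volume growth at a given uptake rate."""
    return (1 + volume) / (1 + users) - 1

print(f"{required_usage_cagr(0.78, 0.53):.0%}")   # ca. 16% usage growth per user per annum
```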

The segmentation should be chosen with some care, although a split in Postpaid and Prepaid should be a minimum requirement. Further refinements would be to include terminal type & capabilities, terminal OS, usage categories, pricing impacts, etc., and we see that the growth prediction process very rapidly gets fairly complex, involving a large number of uncertain assumptions. Needless to say, Growth should be considered per Access Technology, i.e., split in GPRS/EDGE, 3G/HSPA, LTE/LTE-a and WiFi.

Let’s have a look at the (simple) uptake growth of a new technology, or in other words the adaptation rate.

The above chart illustrates the most common uptake trend that we observe in mobile networks (and in many other situations of consumer product adaptation). The highest growth rates are typically observed in the beginning. Over time the growth rate slows down as saturation is reached. In other words the source of growth has been exhausted.

At Day ZERO there were ZERO 3G terminals and owners.

At Day ONE some users had bought 3G terminals (e.g., Nokia 6630).

Going from Zero to Some 3G terminals amounts to an Infinite growth rate … So Wow! … Helpful … Not really!

Some statistics:

In most countries it has taken on average 5 years to reach a 20% 3G penetration.

The KA moment of 3G uptake really came with the introduction of the iPhone 3G (June 9 2008) and HTC/Google G1 (October 2008) smartphones.

Simplified example: in 4 years a Mobile Operator’s 3G uptake went from 2% to 20%, a compounded annual growth rate (CAGR) of at least 78%. Over the same period the average mobile (cellular!) data consumption per user increased by a factor of 15 (e.g., from 20MB to 300MB), which gives us a growth rate of 97% per annum. Thus the total volume today is at least 150 times that of 4 years ago, equivalent to an annual growth rate of 250%!
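
The same numbers, spelled out as a quick sanity check of the 78% / 97% / 250% figures above:

```python
# Sanity check of the simplified example: 3G uptake 2% -> 20% and usage
# 20 MB -> 300 MB per user, both over 4 years.

years = 4
users_cagr = (20 / 2) ** (1 / years) - 1                 # uptake grew 10x  -> ca. 78% p.a.
usage_cagr = (300 / 20) ** (1 / years) - 1               # usage grew 15x   -> ca. 97% p.a.
volume_cagr = (1 + users_cagr) * (1 + usage_cagr) - 1    # combined 150x    -> ca. 250% p.a.

print(f"users {users_cagr:.0%}, usage {usage_cagr:.0%}, volume {volume_cagr:.0%}")
```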

In Geoffrey A. Moore’s book “Crossing the Chasm” (on Marketing and Selling High-Tech products to mainstream customers) the different segments of growth have been mapped out as (1) Innovators (i.e., first adopters), (2) Early Adopters, (3) Early Majority, (4) Late Majority and (5) The Laggards.

It is fairly common to ignore the Laggards in most analysis, as these do not cause direct problems for new technology adaptation. However, in mobile networks Laggards can become a problem if they, by refusing to migrate, prevent the operator from re-farming legacy spectrum, e.g., preventing GSM 900MHz spectrum from being re-purposed to UMTS or GSM 1800 from being re-purposed to LTE.

Each of the stages defined by Geoffrey Moore corresponds to a different time period in the life cycle of a given product, and mapped onto the above chart on technology uptake it looks like this:

In the above “Crossing the Chasm” chart I have imposed Moore’s categories on a logistic-like (or S-curve shaped) cumulative distribution function rather than the Bell Shaped (i.e., normal distribution) chosen in his book.

3G adaptation has typically taken ca. 5+/-1 years from launch to reach the stage of Early Majority.

In the mobile industry it’s fairly common for a user to have more than 1 device (i.e., a handset typically combined with a data stick, tablet, as well as a private & work related device split, etc.). In other words, there are more mobile accounts than mobile users.

In 2011, Western Europe had ca. 550 Million registered mobile accounts (i.e., as measured by active SIM Cards) and a population of little over 400 Million. Thus a mobile penetration of ca. 135%, or 160+% if we consider only the population with a disposable income.

The growth of 3G users (i.e., defined as somebody with a 3G capable terminal) has been quite incredible, with initial annual growth rates exceeding 100%. Did this growth rate continue? NO it did NOT!

As discussed previously, it is absolutely to be expected to see very high growth rates in the early stages of technology adaptation. The starting point is Zero or Very Low and incremental additions weigh more in the beginning than later on in the adaptation process.

The above chart (“CAGR of 3G Customer Uptake vs 3G Penetration”) illustrates the annual 3G uptake growth rate data points, referenced to the year of 10% penetration, for Germany, Netherlands and USA (i.e., which includes CDMA2000). It should be noted that 3G Penetration levels above 50+% are based on Pyramid Research projections.

The initial growth rates are large and then slow down as the 3G penetration increases.

As saturation is reached the growth rate comes almost to a stop.

3G saturation level is expected to be between 70% and 80+% … When LTE takes over!

For most Western European markets the saturation is expected to be reached between 2015 – 2018 and sooner in the USA … LTE takes over!

The (diffusion) process of Technology uptake can be described by S-shaped curves (e.g., as shown in “Crossing the Chasm”). The simplest mathematical description is a symmetric logistic function (i.e., Sigmoid) that only depends on time. The top solid (black) curve shows the compounded annual growth rate, referenced to the Year of 10% 3G penetration, vs 3G penetration. Between 10% and 15% 3G penetration the annual growth rate is 140%, between 10% and 50% it is “only” 108%, and it drops to 65% at 90% 3G penetration (which might never be reached as users start migrating to LTE).

The lower dashed (black) curve is a generalized logistic function that provides a higher degree of modelling flexibility, accounting for a non-symmetric adaptation rate depending on the 3G penetration. No attempt at curve fitting to the data has been applied in the chart above. I find that the generalized logistic function in general can be made to agree well with actual uptake data. Growth here is more modest with 72% (vs 140% for the Simple Logistic representation), 57% (vs 108%) and 35% (vs 65%). It undershoots in the beginning of the growth process (from 10% -> 20%: Innovators & Early Adopters phase) but represents actual data well after 20% 3G penetration (Early and Late Majority).

Finally, I have also included the Gompertz function (also sigmoid), represented by the light (grey) dashed line in between the Simple and Generalized Logistic Functions. The Gompertz function has found many practical applications describing growth. The parameters of the Gompertz function can be chosen so growth near the lower and upper boundaries is different (i.e., asymmetric growth dynamics near the upper and lower asymptotes).
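
For the interested reader, a small sketch of the three S-curve families mentioned above. The parameter values are purely illustrative (no fit to the chart data is attempted, in line with the text):

```python
# Illustrative S-curve (sigmoid) uptake models: simple logistic, generalized
# logistic (Richards) and Gompertz. Parameters are examples only, not fits.
import numpy as np

t = np.linspace(0, 15, 151)      # years since technology launch

def logistic(t, k=0.9, t0=6.0, sat=0.8):
    """Symmetric logistic uptake, saturating at `sat` (e.g., 80% penetration)."""
    return sat / (1 + np.exp(-k * (t - t0)))

def gen_logistic(t, k=0.7, t0=5.0, nu=0.4, sat=0.8):
    """Generalized logistic: asymmetric growth around the inflection point."""
    return sat / (1 + np.exp(-k * (t - t0))) ** (1 / nu)

def gompertz(t, b=5.0, c=0.45, sat=0.8):
    """Gompertz: different growth dynamics near the lower and upper asymptotes."""
    return sat * np.exp(-b * np.exp(-c * t))

for name, f in [("logistic", logistic), ("gen. logistic", gen_logistic), ("Gompertz", gompertz)]:
    p = f(t)
    print(f"{name:>14}: {p[50]:.0%} after 5 yrs, {p[100]:.0%} after 10 yrs")
```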

As most mature 3G markets have passed 50% 3G penetration (i.e., eating into the Late Majority) and are approaching saturation, one should expect to see annual growth rates of 3G uptake reduce rapidly. The introduction of LTE will also have a substantial impact on the 3G uptake and growth.

Of course the above is a simplification of the many factors that should be considered. It is important that you;

  1. Differentiate between Prepaid & Postpaid.
  2. Consider segmentation (e.g., Innovator, First Adopter, Early Majority & Late Majority).
  3. Projections should be self-consistent with market dynamics: i.e., Gross Adds, Churn, hand-down and upgrade dynamics within the Base, etc.

THE GROWTH ESSENTIALS – THE CELLULAR USAGE.

In the following I will focus on Cellular (or Mobile) data consumption. Thus any WiFi consumption on public, corporate or residential access points is deliberately not considered in the following. Obviously, in cellular data demand forecasting WiFi usage can be important as it might be a potential source for cellular consumption via on-loading, in particular as new and better performing cellular technologies are being introduced (i.e., LTE / LTE advanced). Also, price plan policy changes might result in higher on-load of the cellular network (at least if that network is relatively unloaded and with lots of spare capacity).

It should come as no surprise that today the majority of mobile data consumers are Postpaid.

Thus, most of the average data usage figures being reported are based on the Postpaid segment. This also implies that projecting future usage based on past and current usage could easily overshoot, particularly if Prepaid consumption is substantially lower than Postpaid data consumption. The interesting and maybe somewhat surprising observation is that Active Prepaid mobile data consumers can have a fairly high data consumption (obviously pending price plan policy). In the example shown below, for a Western European Operator with a ca. 50%:50% Postpaid – Prepaid mix, the Postpaid active mobile data consumers are 85% of the total Postpaid Base. The Mobile Data Active Prepaid base is only 15% (though growing fast).

The illustrated data set, which is fairly representative of an aggressive smartphone operation, has an average data consumption of ca. 100MB (based on the whole customer base) and an Active Average consumption of ca. 350MB. Though fairly big consumption variations are observed within various segments of the customer base.

The first 4 Postpaid price plans are Smartphone based (i.e., iOS and Android) and comprise 80% of all active devices on the Network. “Other Postpaid” comprises Basic Phones, Symbian and RIM devices. The Active Prepaid device consumption is primarily Android based.

We observe the following:

  1. Unlimited price plan results in the highest average volumetric usage (“Unlimited Postpaid” & “Postpaid 1″ price plans are comparable in device composition. The difference is in one being unlimited the other not).
  2. Unlimited average consumption dominated by long tail towards extreme usage (see chart below).
  3. Smartphone centric postpaid price plans tend to have a very high utilization percentage (90+%).
  4. Active Prepaid Data Consumption (200MB) almost as high as less aggressive smartphone (210MB) price plans (this is however greatly depending on prepaid price policy).

The above chart “Cellular Data Consumption Distribution” illustrates the complexity of technology and cellular data consumption even within different price plan policies. Most of the distributions consist of up to 4 sub-segments of usage profiles. Most notable are the higher consumption segment and the non-/very-low consumption segment.

There are several observations worth mentioning:

  • Still a largely untapped Prepaid potential (for new revenue as well as additional usage).
  • 15% of Postpaid consumers are data inactive (i.e., Data Laggards).
  • 40% of active Postpaid base consumes less than 100MB or less than 1/4 of the average high-end Smartphone usage.

Clearly, the best approach to come to a meaningful projection of cellular data usage (per consumer) would be to consider all the above factors in the estimate.

However, there is a problem!

The Past Trends may not be a good basis for predicting Future Trends!

Using The Past we might risk largely ignoring:

  1. Technology Improvements that would increase cellular data consumption.
  2. New Services that would boost cellular data usage per consumer.
  3. New Terminal types that would lead to another leapfrog in cellular data consumption.
  4. Cellular Network Congestion leading to reduced growth of data consumption (i.e., reduced available speed per consumer, QoS degradation, etc..).
  5. Policy changes such as Cap or allowing Unlimited usage.

Improvements in terminal equipment performance (i.e., higher air interface speed capabilities, more memory, better CPU performance, larger / better displays, …) should be factored into the cellular data consumption as the following chart illustrates (for more details see also Dr. Kim’s Slideshare presentation on “Right Pricing Mobile Broadband: Examing The Business Case for Mobile Broadband”).

I like to think of every segment category as having its own particular average data usage. A very simple consideration (supported by real data measurements) would be to expect to find the extreme (or very high) data usage in the Innovator and Early Adopter segments, and as more of the Majority (Early as well as Late) is considered the data usage reduces. Eventually, in the Laggards segment, hardly any data usage is observed.

It should be clear that the above average usage-distribution profile is dynamic. As time goes by the distribution will spread out towards higher usage (i.e., the per-user per-segment inflationary consumption), at the same time as increasingly more of a given operator’s customer base reflects the majority (i.e., Early and Late Majority).

Thus, over time, would it be reasonable to expect that:

The average volumetric consumption could develop to an average that is lower than when Innovators & Early Adopters dominated.

Well maybe!? Maybe not?!

The usage dynamics within a given price plan are non-trivial (to say the least) and we see in general a tendency towards the higher usage sub-segments (i.e., within a given capped price plan). The following chart (below) is a good example of the data consumption within the same Capped Smartphone price plan over a 12 month period. The total number of consumers in this particular example has increased 2.5 times over the period.

It is clear from the above chart that over the 12 month period the higher usage sub-segment has become increasingly popular. Irrespective of this, the overall average (including non-active users of this Smartphone price plan) has not increased over the period.

Though by no means does this need to be true for all price plans. The following chart illustrates the dynamics over a 12 month period of an older Unlimited Smartphone price plan:

Here we actually observe a 38% increase in the average volumetric consumption per customer. Over the period ca. 50% of the customers in this price plan have dropped out, leaving primarily heavy users enjoying the benefits of unlimited consumption.

There is little doubt that most mature developed markets with a long history of 3G/HSPA will have reached a 3G uptake level that includes most of the Late Majority segment.

However, for the prepaid segment it is also fair to say that most mobile operators have likely only started to approach and appeal to Innovators and Early Adopters. The chart below illustrates the last 12 months of prepaid cellular consumption behavior.

In this particular example ca. 90% of the Prepaid customer base are not active cellular data consumers (this is not an unusual figure). Even over the period this number has not changed substantially. The Active Prepaid segment consumes on average 40% more cellular data than 12 months ago. There is a strong indication that the prepaid consumption dynamics resemble those of Postpaid.

Data Consumption is a lot more complex than Technology Adaptation of the Cellular Customer.

At a high level, the data consumption dynamics are pretty much as follows;

  1. Late (and in some cases Early) Majority segments commence consuming cellular data (this will drag down the overall average).
  2. Fewer non-active cellular data consumers (besides Laggards) -> having an upward pull on the average consumption.
  3. (In particular) Innovator & Early Adopter consumption increases within the limits of a given price plan (this will tend to pull up the average).
  4. General migration upwards to higher sub-segmented usage (pulling the overall average upwards).
  5. If Capped pricing is implemented (without any Unlimited price plans in effect) growth will slow down as consumers approach the cap.

We have also seen that it is sort of foolish to discuss a single data usage figure and try to create all kinds of speculative stories about such a number.

BRINGING IT ALL TOGETHER.

So what’s all this worth unless one can predict some (uncertain) growth rates!

WESTERN EUROPE (AT, BE, DK, FIN, F, DE, GR, IRL, IT, NL, N, P, ESP, SE, CH, UK)

3G uptake in WEU was ca. 60% in 2011 (i.e., ca. 334 Million 3G devices). This corresponds to ca. 90% of all Postpaid customers and 32% of all Prepaid users having a 3G device. Of course it does not mean that all of these are active cellular data users. Actually, today (June 2012) ca. 35% of the postpaid 3G users can be regarded as non-active cellular data users, and for prepaid this number may be as high as 90%.

For Western Europe, I do not see many more 3G additions in the Postpaid segment. It will be more about replacement and natural upgrade to higher capable devices (i.e., higher air interface speed, better CPU, memory, display, etc.). We will see an increasing migration from 3G Postpaid towards LTE Postpaid. This migration will really pick up between 2015 and 2020 (Western Europe lagging behind in LTE adaptation in comparison with, for example, the USA and some of the Asia-Pacific countries). In principle this could also mean that growth of 3G postpaid cellular data consumption could rapidly decline (towards 2020) and we would start seeing overall 3G Postpaid data traffic decline rather than increase.

Additional cellular data growth may come from the Prepaid segment. However, a very large proportion of this segment is still largely data in-active in Western Europe. There are signs that, depending on the operator’s prepaid price plan policy, prepaid consumption appears to be fairly similar to Postpaid on a per-user basis.

3G Growth Projections for Western Europe (reference year 2011):

The above assumes that usage caps will remain. I have assumed this to be 2GB (on average for WEU). Further, it is assumed that the Prepaid segment will remain largely dominated by Laggards (i.e., in-active cellular data users) and that the active Prepaid cellular data users have consumption similar to Postpaid.

Overall 3G cellular data growth for Western Europe comes to between 3x and no more than 4x (the latter for very aggressive prepaid cellular data uptake & growth) over the period 2011 to 2016.

Postpaid 3G cellular data growth will flatten and possibly decline towards the end of 2020.

More aggressive LTE Smartphone uptake (though on average across Western Europe this appears unlikely) could further relieve 3G growth pains between 2015 – 2020.

Innovators & Early Adopters, who demand the most of the 3G Cellular Networks, should be expected to move quickly to LTE (as coverage is provided) off-loading the 3G networks over-proportionally.

The 3G cellular growth projections are an average consideration for Western Europe, where most of the postpaid 3G growth has already happened, with an average of 60% overall 3G penetration. As a rule of thumb: the lower the 3G penetration, the higher the CAGR growth rates (as measured from a given earlier reference point).

In order to be really meaningful and directly usable to a Mobile Operator, the above approach should be carried out for a given country and a given operator’s conditions.

The above growth rates are lower than, but within range of, what my Twitter Colleague Dean Bubley (@Disruptivedean) states as his expectations for Developed Markets in his Blog “Mobile data traffic growth – a thought experiment and forecast”. Not that this makes them more correct or more wrong! Though anyone who spends a little time on the growth fundamentals of existing Western European mobile data markets would not find this kind of growth rate surprising.

So what about LTE growth? … well, given that we today (in Western Europe) have a very, very small installed base of LTE devices on our networks … the growth or uptake (seen on its own) is obviously going to be very HIGH for the first 5 to 7 years (depending on go-to-market strategies).

What will be particularly interesting with the launch of LTE is whether we will see an on-loading effect on the cellular LTE network from today’s WiFi usage. Thomas Wehmeier (Principal Analyst, Telco Strategy, Informa @Twehmeier) has published two very interesting and study-worthy reports on Cellular and WiFi Smartphone Usage (see “Understanding today’s smartphone user: Demystifying data usage trends on cellular & Wi-Fi networks” from Q1 2012 as well as Thomas’s follow-up report from a couple of weeks ago “Understanding today’s smartphone user: Part 2: An expanded view by data plan size, OS, device type and LTE”).

THE CLIFFHANGER

Given the dramatic beginning of my Blog concerning the future of the Mobile Industry and Cellular data … and to be fair to many of the valid objections that Dean Bubley has raised in his own Blog and in his Tweets … I do owe the reader who got through this story some answers …

I have no doubt (actually I know) that there are mobile operators (around the world) that already today are in dire straits with their spectral resources due to very aggressive data growth triggered by the Smartphone. Even if growth has slowed down as their 3G customers (i.e., Postpaid segment) have reached the Late Majority (and possibly are fighting the Laggards), that lower growth rate still causes substantial challenges in providing sufficient capacity &, not to forget, quality.

Yes … 3G/HSPA+ Small Cells (and DAS-like solutions) will help mitigate the growing pains of mobile operators, Yes … WiFi off-load too, Yes … LTE & LTE-advanced will help too. Though the last solution will not be much of a help before a critical mass of LTE terminals has been reached (i.e., ca. 20% = Innovators + Early Adopters).

Often forgotten: traffic management and policy remedies (not per se Fair Use Policy though!) are of critical importance too in the toolset for managing cellular data traffic.

Operators in emerging markets and in markets with a relatively low 3G penetration had better learn the Growth Lessons from AT&T and other similar Front Runners in the Cellular Data and Smartphone Game.

  1. Unless you manage cellular data growth from the very early days, you are asking for (inexcusable) growth problems.
  2. Being Big in terms of customers is not per se a blessing if you don’t have proportionally the spectrum to support that Base.
  3. Don’t expect to keep the same quality level throughout your 3G Cellular Data life-cycle!
  4. Accept that spectral overhead per customer obviously will dwindle as increasingly more customers migrate to 3G/HSPA+.
  5. Technology Laggards should be considered as they pose an enormous risk to spectral re-farming and migration to more data efficient technologies.
  6. Short Term (3 – 5 years) … LTE will not mitigate 3G growing pains (you have a problem today, it’s going to get tougher and then some tomorrow).

Is Doom knocking on Telecom’s Door? … Not very likely (or at least we don’t need to open the door if we are smart about it) … Though if an Operator doesn’t learn fast and isn’t furiously passionate about economical operation and pricing policies … things might look a lot gloomier than they need to be.

STAY TUNED FOR A PART 2 … taking up the last part in more detail.

ACKNOWLEDGEMENT

To great friends and colleagues that have challenged, suggested, discussed, screamed and shouted (in general shared the passion) about this incredibly important topic of Cellular Data Growth for our Mobile Industry (and increasingly Fixed Broadband). I am in particular indebted to Dejan Radosavljevik for bearing with my sometimes crazy data requests (at odd hours and moments) and last but not least for thinking along with me on what mobile data (cellular & WiFi) really means (though we both have come to the conclusion that being mobile is not what it means. But that is a different interesting story for another time).

Social Media Valuation …. a walk on the wild side.

Lately I have wondered about Social Media Companies and their Financial Valuations. Is it hot air in a balloon that can blow up any day? Or are the hundreds of millions and billions of US Dollars tied to Social Media Valuations reasonable and sustainable in the longer run? The last question is particularly important as more than 70% of the value in Social Media is 5 or many more years out in the Future. Social Media startup companies, without any turnover, are regularly being bought for, or able to raise money at a value in, the hundreds of millions US dollar range. Lately, Instagram was bought by Facebook for 1 Billion US Dollars. Facebook itself was valued at $100B at its IPO. Now, several months after the initial public offering, Facebook may have lost as much as 50% of the originally claimed IPO value.

Facebook, since its IPO, has lost ca. 500 Million US Dollars of value per day (as of 30-July-2012).

What is the valuation make-up of Social Media? And more interestingly, what are the conditions that need to be met to justify $100B or $50B for Facebook, $8B for Twitter, $3B (as of 30-July-2012, $5B prior to Q2 Financials), or $1B for Instagram, a 2 year old company with a cool mobile phone Photo App? Are the Social Media Business Models real? Or are they based on an almost religious belief that someday in the future they will Return On Investment, justifying the amount of money pumped into them?

My curiosity and analytical “hackaton” got sparked by the following Tweet:

Indeed! What could possibly justify paying 1 Billion US Dollars for Instagram, which agreeably has a very cool FREE Smartphone Photo App (far better than Facebook’s own), BUT without any income?

  • Instagram, initially an iOS App, claims 50 Million Mobile Users (ca. 5 Million unique visitors and 31 Million page-views as of July 2012). 5+M photos are uploaded daily with a total of 1+ Billion photos uploaded. No reported revenues to date. Prior to being bought by Facebook for $1 Billion, it was supposedly preparing a new funding round valued at 500 Million US$.
  • Facebook has 900M users, 526M (58%) active daily and 500M mobile users (May 2012). 250M photos are uploaded daily with a total of 150 Billion photos. Facebook generated ca. $5B in revenue in 2011 and current market cap is ca. $61B (24 July 2012). 85% of FB revenue in 2011 came from advertisement.

The transaction gives a whole new meaning to “A picture is worth a Billion words”  … and Instagram is ALL about PICTURES & SOCIAL interactions!

Instagram is a (really cool & simple) mobile & smartphone optimized App. Something that would be difficult to say about FB’s mobile environment (in particular when it comes to photo experience).

One thing is of course clear. If FB is willing to lay down $1B for Instagram, their valuation of it should be a good deal higher than $1B (i.e., ca. $4+B?). It will be very interesting to see how FB plans to monetize Instagram. Though the acquisition might be seen as a longer-outlook protective move to secure Facebook’s share of the Mobile Market, which for Social Media will become much more important than traditional desktop access.

So how can we get a reality check on a given valuation?

Let’s first look at the main Business Models of today (i.e., how the money will be or is made);

  1. Capture advertising spend – typically online advertisement spend (total of $94B in 2012 out of an expected total Media Ad spend of $530B). With uptake of tablets traditional “printed media” advertising spend might be up for grabs as well (i.e., getting a higher share of the total Media Ad spend).
  2. Virtual Goods & credits (e.g., Zynga’s games and FB’s revenue share model) – The Virtual Economy has been projected to be ca. $3B in 2012 (cumulative annual growth rate of 35% from 2010).
  3. Paid subscriptions (e.g., LinkedIn’s Premium Accounts: Business Plus, Job Seeker, etc., or like Spotify Premium, etc.).
  4. B2B Services (e.g.,. LinkedIn’s Hiring Solutions).

The Online Advertisement Spend is currently the single biggest source of revenue for the Social Media Business Model. For example Google (which is more internet search than Social Media) takes almost 50% of the total available online advertisement spend, and this accounts for more than 95% of Google’s revenues. In contrast, Facebook in 2011 only captured ca. 4+% of Online Ad Spend, which accounted for ca. 85% of FB’s total revenue. By 2015, eMarketer.com (see http://www.emarketer.com/PressRelease.aspx?R=1008479) has projected the total online advertisement spend could be in the order of $132B (a +65% increase compared to 2011). USA and Western Europe are expected to account for 67% of the $132B by 2015.

Virtual Goods are expected to turn over ca. $3B in 2012. The revenue potential from Social Networks and Mobile has been projected (see Lazard Capital’s Atul Bagga ppt on “Emerging Trends in Games-as-a-Service”) to be ca. $10B worldwide by 2015. If (and that is a very big if) the trend continues, the 2020 potential would be in the order of $60B (though I would expect this to be a maximum and very optimistic upside potential).

So how can a pedestrian get an idea about Social Media valuation? How can one get a reality check on these Billionaires being created en masse at the moment in the Social Media sphere?

“Just for fun” (and before I get really “serious”) I decided to see whether there is any correlation between a given valuation and the number of Unique Visitors (per month) and Pageviews (per month) … my possibly oversimplified logic would be that if the main part of the Social Media business model is to get a share of the Online Advertisement Spending, there needs to be some sort of dependency on those metrics (i.e., obviously what’s really important is the clickthrough rate, but let’s forget this for a moment or two):

The two charts (log-log scaled) show Valuation (in Billion US$) versus Unique Visitors (in Millions) and Pageviews (in Billions). While the correlations are not perfect, they are really not that crazy either. I should stress that the correlations are power-law correlations, NOT LINEAR, i.e., Valuation increases as a power of unique and active users/visitors (a small sketch of such a power-law fit is given after the chart notes below).

An interesting outlier is Pinterest. Let’s just agree that this does not per se mean that Pinterest’s valuation at $1.5B is too low! … it could also imply that the rest are somewhat on the high side! 😉

Note: Unique Visitors and Pageview statistics can be taken from Google’s DoubleClick Ad Planner. It is a wonderful source of domain attractiveness, usage and user information.

Companies considered in Charts: Google, Facebook, Yahoo, LinkedIN, Twitter, Groupon, Zynga, AOL, Pinterest, Instagram (@ $1B), Evernote, Tumblr, Foursquare, Baidu.
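
As promised above, a small sketch of how such a power-law check can be done: a linear regression in log-log space. The data points below are purely illustrative placeholders, not the actual figures behind the charts (which came from DoubleClick Ad Planner and the public valuations):

```python
# Fit Valuation ~ a * Visitors^b by linear regression on the logarithms.
# The numbers below are hypothetical placeholders for illustration only.
import numpy as np

visitors_millions = np.array([50, 150, 250, 700, 1000])     # unique visitors per month (hypothetical)
valuation_billions = np.array([1.5, 8, 20, 60, 200])        # valuation in $B (hypothetical)

b, log_a = np.polyfit(np.log10(visitors_millions), np.log10(valuation_billions), 1)
print(f"Valuation ~ {10 ** log_a:.3f} x Visitors^{b:.2f}  (a power law, not a linear relation)")
```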

That’s all fine … but we can (and should) do better than that!

eMarketer.com has given us an Online Advertisement Spend forecast (at least until 2015). In 2011, online advertisement accounted for 95% of Google’s revenue and for at least 85% of Facebook’s. So we are pretty close to having an idea of the Topline (or revenue) potential going forward. In addition, we also need to understand how that Revenue translates into Free Cash Flow (FCF), which will be the basis for my simple valuation analysis. To get to a Free Cash Flow picture we could develop a detailed P&L model for the company of interest. Certainly an interesting exercise, but it would require “Millions” of educated guesses and assumptions for a business that we don’t really know.

Modelling a company’s P&L is not really a peaceful walk for our interested pedestrian to take.

A little research using Google Finance, Yahoo Finance or for example Ycharts.com (nope! I am not being sponsored;-) will in general reveal a typical cash yield (i.e., the ratio of FCF to Revenue) for a given type of company in a given business cycle.

Examples of FCF performance relative to Revenues: Google for example has had an average FCF yield of 30% over the last 4 years, Yahoo’s 4 year average was 12% (between 2003 and 2007 Google and Yahoo had fairly similar yields). Facebook has been increasing its yield steadily from 2009 (ca. 16%) to 2011 (ca. 25%), while Zynga had 45% in 2010 and then dropped to 13% in 2011.

So having an impression of the revenue potential (i.e., from eMarketer) and an idea of best practice free cash flow yield, we can start getting an idea of the Value of a given company. It should of course be clear that we can also turn this Simple Analysis around and ask what the Revenue & Yield should be in order to justify a given valuation. This would give a reality check on a given valuation, as the Revenue should be in reasonable relation to market and business expectations.

Let’s start with Google (for the moment totally ignoring Motorola;-):

Nothing fancy! I am basically assuming Google can keep its share of Online Advertising Spend (as taken from eMarketer) and that Google can keep its FCF Yield at a 30% level. The discount rate (or WACC) of 9% currently seems to be a fair benchmark (http://www.wikiwealth.com/wacc-analysis:goog). I am (trying) to be conservative and assume a 0% future growth rate (i.e., changing this will in general have a high impact on the Terminal Value). If all this comes true, Google’s value would be around 190 Billion US Dollars. Today (26 July 2012) Google Finance tells me that their Market Capitalization is $198B (see http://www.google.com/finance?q=NASDAQ:GOOG), which is 3% higher than the very simple model above.
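
A minimal sketch of that Google calculation. The ad-spend path and Google’s share below are my placeholder assumptions in the spirit of the eMarketer figures quoted above, so the output is indicative only and will not match the model behind the chart exactly:

```python
# Simple DCF sketch: revenue from a share of the online-ad-spend forecast, a 30%
# FCF yield, 9% WACC and 0% terminal growth. Inputs are placeholder assumptions.

wacc, fcf_yield, terminal_growth = 0.09, 0.30, 0.00
ad_spend = [94, 106, 119, 132]      # 2012..2015 in $B (assumed path towards the $132B in 2015)
google_share = 0.48                 # "almost 50%" of online ad spend
non_ad_share = 0.05                 # ad revenue is ca. 95% of total revenue

value = 0.0
for t, spend in enumerate(ad_spend, start=1):
    fcf = fcf_yield * google_share * spend / (1 - non_ad_share)
    value += fcf / (1 + wacc) ** t
# Gordon-growth terminal value on the last explicit year's free cash flow
value += fcf * (1 + terminal_growth) / (wacc - terminal_growth) / (1 + wacc) ** len(ad_spend)
print(f"Indicative enterprise value: ca. ${value:.0f}B")
```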

How does the valuation picture look for Facebook (pre-Zynga results as of yesterday 25 July 2012):

First thought is HALLELUJAH … Facebook is really worth 100 Billion US Dollars! … ca. $46.7 per share … JAIN (as they would say in Germany) … meaning YES and NO!

  • Only if Facebook can grow from capturing ca. 6% of the Online Advertisement Spend today to 20% in the next 5 – 6 years.
  • Only if Facebook can improve their Free Cash Flow Yield from today’s ca. 25% to 30%.
  • Only if Facebook’s other revenues (i.e., from Virtual Goods, Zynga, etc.) can grow to be 20% of their business.

What could possible go wrong?

  • Facebook fatigue … users leaving FB for something else (let’s be honest! FB has become a very complex user interface and “sort of sucks” on the mobile platforms. I guess that is one reason for the Instagram acquisition).
  • Disruptive competitors/trends (which FB cannot keep buying up before they get serious) … just a matter of time. I expect this to happen first in the Mobile Segment and then spread to desktop/laptop.
  • Non-advertisement revenues (e.g., from Virtual Goods, Zynga, etc.) disappoint.
  • A need for increasing investments in infrastructure to support customer and usage growth (i.e., negative impact on cash yields).
  • The Social Media business being much more volatile than current hype would allow us to assume.

So how would a possibly more realistic case look for Facebook?

Here I assume that Facebook will grow to take 15% (versus 20% above) of the Online Ad Spend, and that Facebook can keep a 25% FCF Yield (versus growing to 30% in the above model). The contribution from Other Revenues has been brought down to a level more in line with the Virtual Goods and Social Media Gaming expectations (see for example Atul Bagga, Lazard Capital Markets, analysis http://twvideo01.ubm-).

The more conservative assumptions (though with 32% annual revenue growth hardly a very dark outlook) result in a valuation of $56 Billion (i.e., a share price of ca. $26). A little bit more than half the previous (much) more optimistic outlook for Facebook. Not bad at all of course … but maybe not what you want to see if you paid a premium for the Facebook share? Facebook’s current market capitalization (26 July 2012, 18:43 CET) is ca. $60B (i.e., $28/share).

So what is Facebook’s value? $100B (maybe not), $50+B, or around $60+B? Well, it all depends on how shareholders believe Facebook’s business will evolve over the next 5 – 10 (and beyond) years. If you are in for the long run it would be better to be conservative and keep the lower valuation in mind rather than the $100B upside.

Very few of us actually sit down and do a little estimation ourselves (we follow others = in a certain sense we are financial lemmings). With a little bit of Google Search (yes there is a reason why they are so valuable;-) and a couple of lines of Excel (or pen and paper) it is possible to get an educated idea about a certain valuation range and see whether the price you paid was fair or not.

Let’s just make a little detour!

Compare Facebook’s current market capitalization of ca. $60B (@ 26 July 2012, 18:43 CET) at $3.7B Revenue (2011) and ca. $1B of free cash flow (2011). Clearly all the value is in anticipation of future business! Compare this with Deutsche Telekom AG, with a market capitalization of ca. $50B at $59B revenue (2011, down 6% YoY) and ca. $7.8B of free cash flow (2011). It is fascinating that a business with a well defined business model, paying customers, healthy revenue (16x FB) and cash flow (8x FB) can be worth a lot less than a company that relies solely on anticipation of a great future. Facebook’s / Social Media’s Business Model future appears a lot more optimistic (the blissful unknown) than the Traditional Telco Business Model’s (the “known” unknown). Social Media by 2015 is a game of maybe a couple of hundred Billion (mainly from advertisement, app sales and the virtual economy) versus Telecom Mobile (ignoring the fixed side) being a Trillion+ (1,000 x Billion) business.

Getting back to Social Media and Instagram!

So coming back to Instagram … is it worth paying $1B for?

Let’s remind ourselves that Instagram is a Mobile Social Media Photo sharing platform (or Application) serving Apple iOS (originally exclusively so) and Android. Instagram has ca. 50+M registered users (by Q1’2012) with 5+M photos uploaded per day and a total of 1+B photos uploaded. Instagram is a thoroughly optimized smartphone application. There are currently more than 460+ photo apps, with 60Photos being second to Instagram in monthly usage (http://www.socialbakers.com/facebook-applications/category/70-photo).

Anyway, to get an idea about Instagram’s valuation potential, it would appear reasonable to assume that their Business Model would target the Mobile Advertisement Spend (which is a sub-set of Online Ad Spend). To get somewhere with our simple valuation framework I assume:

  1. that Instagram can capture up to 10% of the Mobile Ad Spend by 2015 – 2016 (possible Facebook boost effect, better payment deals, keeping ad revenue within Facebook).
  2. that Instagram’s revenue share dynamics are similar to Facebook’s initial revenue growth from Online Ad Spend.
  3. that Instagram can manage an FCF Yield of 15% over the period analysed (there could be substantial synergies with Facebook’s capital expenditures).

In principle the answer to the question above is YES, paying $1B for Instagram would be worth it, as we get almost $5B from our small and simple valuation exercise … if one believes;

  1. Instagram can capture 10% of the Mobile Advertisement Spend (over the next 5 – 6 years).
  2. Instagram can manage a Free Cash Flow Yield of at least 15% by Year 6.

Interestingly, looking only at the next 5 years would indicate a value in the order of $500M. This is close to the rumored funding round that was in preparation before Facebook laid down $1B. However, and not surprisingly, most of the value for Instagram comes from beyond the 5 years. The Terminal Value amounts to 90% of the Enterprise Value.

For Facebook to break even on their investment, Instagram would need to capture no more than 3% of the Mobile Ad Spend over the 5 year period (assuming that the FCF Yield remains at 10% and does not improve due to scale).

Irrespective;

Most of the Value of Social Media is in the Expectations of the Future.

70+% of Social Media Valuation relies on the Business Model remaining valid beyond the first 5 years.

With this in mind, and knowing that the next 5 years will see a massive move from desktop-dominated Social Media to Mobile-dominated Social Media, we should be somewhat nervous about desktop-originated Social Media Businesses and whether these can and will make the transformation.

The question we should ask is:

Tomorrow, will today’s dot-socials be yesterday’s busted dot-coms?

PS

For the pedestrian that wants to get deeper into the mud of valuation methodologies I can really recommend “Valuation: Measuring & Managing the Value of Companies” by Tim Koller, Marc Goedhart & David Wessels (http://www.amazon.com/Valuation-Measuring-Managing-Companies-Edition/dp/0470424656). Further, there are some really cool modelling exercises to be done on the advertisement spend projections and the drivers behind them, as well as a deeper understanding (i.e., modelling) of the capital requirements and structure of Social Media Business Models.

In case of interest in the simple models used here and the various sources … don’t be a stranger … get in touch!

PSPS (as of 28-July-2012) – A note on Estimated Facebook Market Capitalization

In the above Facebook valuation commentary I have used the information from Google Finance (http://www.google.com/finance?q=facebook) and Yahoo Finance (http://finance.yahoo.com/q?s=FB), both basing their Market Capitalization estimation on 2.14B shares. MarketWatch (http://www.marketwatch.com/investing/stock/fb) appears to use 2.75B shares (i.e., 29% higher than Google & Yahoo). Obviously, the MarketWatch market capitalization is thus higher than what Google & Yahoo would estimate.

Mobile Data Consumption, the Average Truth? the Average Lie?

“Figures often beguile me” leading to the statement that “There are three kinds of lies: lies, damned lies, and statistics.” (Mark Twain, 1906).

We are so used to averages … Read any blog or newspaper article trying to capture a complex issue and it’s more than likely that you are being told a story of averages … Adding to Mark Twain’s quote on Lies, in our data intense world “The Average is often enough the road to an un-intentional Lie” … or just about “The Average Lie”.

Imagine this! Having (at the same time) your feet in the oven at 80C and your head in the freezer at -6C … You would be perfectly OK! On average! As your average temperature would equal (80C + (-6C)) divided by 2, which is 37C, i.e., the normal and recommended body temperature for an adult human being. However, both your feet and your head are likely to suffer from such an experiment (and it therefore really should not be tried out … or should be left to Finns used to Sauna and icy water … though even the Finns seldom enjoy this simultaneously).

Try this! Add together the ages of the members of your household and divide by the number of members. This gives you the average age of your household … does the average age you calculated have any meaning? … If you have young children or grandparents living with you, I think there is a fairly high chance that the answer to that question is NO! … The average age of my family’s household is 28 years. However, this number is a meaningless average representation of my household. It is 20 times higher than my son’s age and about 40% lower than my own age.

Most numbers, most conclusions, most stories, most (average) analyses are based on an average representation of one or another Reality … and as such can easily lead to Reality Distortion.

When we are presented with averages (or mean values, as they are also called in statistics), we tend to substitute Average with Normal and believe that the story represents most of us (i.e., statistically this means about 68% of us). More often than not we sit back with the funny feeling that if what we just read is “normal” then maybe we are not.

On mobile data consumption (I’ll come back to Smartphone data consumption a bit later) … There is one (non-average) truth about mobile data consumption that has widely (and correctly) been communicated …

Very few mobile customers (10%) consumes the very most of the mobile data traffic (90%).

(see for example: http://www.nytimes.com/2012/01/06/technology/top-1-of-mobile-users-use-half-of-worlds-wireless-bandwidth.html/).

Let’s just assume that a mobile operator claims an average 200MB monthly consumption (source: http://gigaom.com/broadband/despite-critics-cisco-stands-by-its-data-deluge/). Let’s also assume that 10% of the customer base generates 90% of the traffic. It follows that the high usage segment has an average volumetric usage of 1,800MB and the low usage segment an average volumetric usage of only 22MB. In other words, 10% of the customer base has 80+ times higher consumption than the remaining 90%. The initial average consumption (taken across the whole customer base) of 200MB communicated is actually 9 times higher than the average consumption of 90% of the customer base. It follows (with some use case exceptions) that the 10% high usage segment consumes a lot more Network Resources and Time. The time the high usage segment spends actively with their devices is likely to be a lot higher than for the 90% low usage segment.
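
The arithmetic behind those segment averages, for the pedestrian that wants to check (assumptions exactly as stated above: a 200MB blended average, with 10% of the base generating 90% of the traffic):

```python
# Splitting a blended average into heavy-user and light-user segment averages.

base_avg_mb = 200.0
heavy_share, heavy_traffic = 0.10, 0.90

heavy_avg = base_avg_mb * heavy_traffic / heavy_share              # 1,800 MB
light_avg = base_avg_mb * (1 - heavy_traffic) / (1 - heavy_share)  # ca. 22 MB

print(f"Heavy 10%: {heavy_avg:,.0f} MB/month, remaining 90%: {light_avg:.0f} MB/month")
print(f"Heavy vs light: {heavy_avg / light_avg:.0f}x; the blended average is "
      f"{base_avg_mb / light_avg:.0f}x the light-user average")
```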

The 200MB is hardly normal! It is one of many averages that can be calculated. Obviously 200MB is a lot more “sexy” than stating that 90% of the customer base typically consumes 22MB.

(Infographic created using PiktoChart: http://app.piktochart.com)

Do Care about Measurement and Data Processing!

What further complicates consumptive values being quoted is how the underlying data have been measured, processed and calculated!

  1. Is the averaging done over the whole customer base?
  2. Is the averaging done over active customers only?
  3. Is it done over a subset of active customers (i.e., 2G vs 3G, 3G vs HSPA+ vs LTE vs WiFi, smartphone vs basic phone, iPad vs iPhone vs Laptop, prepaid vs postpaid, etc.)?
  4. Or over a smaller subset based on particular sample criteria (i.e., iOS, Android, iPad, iPhone, Galaxy, price plan, etc.) or data availability (mobile Apps installed, customer approval, etc.)?

Without knowing the basis of a given average number, any bright analysis or cool conclusion might be little more than Conjecture or Clever Spin.
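A minimal Python sketch of why the averaging basis matters; the ten usage figures below are purely illustrative (not operator data), but the effect is the same on any real base:

```python
from statistics import mean

# Hypothetical monthly usage (MB) for ten subscribers; zeros are inactive SIMs,
# and the flag marks a (made-up) smartphone subset.
usage_mb   = [0, 0, 5, 10, 20, 30, 60, 150, 400, 1800]
smartphone = [False, False, False, False, False, True, True, True, True, True]

whole_base  = mean(usage_mb)                                          # basis 1: everyone
active_only = mean([u for u in usage_mb if u > 0])                    # basis 2: active customers
smart_only  = mean([u for u, s in zip(usage_mb, smartphone) if s])    # basis 3: smartphone subset

print(f"Whole base: {whole_base:.0f} MB | Active only: {active_only:.0f} MB | Smartphones only: {smart_only:.0f} MB")
# Same customers, three different "average consumptions" - all technically correct.
```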

On Smartphone Usage

One of the most recently publicized studies on Smartphone usage comes from O2/Telefonica UK (Source: http://mediacentre.o2.co.uk/Press-Releases/Making-calls-has-become-fifth-most-frequent-use-for-a-Smartphone-for-newly-networked-generation-of-users-390.aspx). The O2 data provides an overview of average daily Smartphone usage across 10 use-case categories.

O2's Smartphone statistics have been broken down in detail by one of our industry's brightest, Tomi Ahonen (a must read: http://www.communities-dominate.blogs.com/ though it is drowning in his Nokia/Mr. Elop “Howler Letters”). Tomi points out the Smartphone's disruptive potential to replace many legacy consumer products (e.g., think: watch, alarm clock, camera, etc.).

The O2 Smartphone data is intuitive and exactly what one would expect! Boring really! Possibly with the exception of Tomi's storytelling (see above reference)! The data was so boring that The Telegraph (source: http://www.telegraph.co.uk/technology/mobile-phones/9365085/Smartphones-hardly-used-for-calls.html) had to conclude that “Smartphones Hardly Used for Calls”. Relative to other uses, of course, not really an untruth.

Though The Telegraph did miss (or did not care about) the fact that both Calls and SMS appeared to be what one would expect (and why would a Smartphone generate more Voice and SMS than Normal? … hmmmm). Obviously, the Smartphone is used for a lot of other stuff than calling and SMSing! The data tells us that an average Smartphone user (whatever that means) spends ca. 42 minutes on web browsing and social networking while “only” 22 minutes on Calls and SMS (i.e., actually 9 minutes of SMS sounds more like a teenager than a high-end smartphone user … but never mind that!). There is a lot of other stuff going on with that Smartphone. In fact, out of the total daily usage of 128 minutes only 17% of the time (i.e., 22 minutes) is used for Plain Old Mobile Telephony Services (The POMTS). We do however find that both voice minutes and legacy messaging consumption are declining faster in the Smartphone segment than for Basic Phones (which are declining rapidly as well), as OTT Mobile App alternatives substitute for The POMTS (see inserted chart from http://www.slideshare.net/KimKyllesbechLarsen/de-risking-the-broadband-business-model-kkl2411201108x).

I have no doubt that the O2 data represents an averaging across a given Smartphone sample; the question is how this data helps us to understand the Real Smartphone User and his behavior.

So how did O2 measure this data?

  1. To be reliable and reasonable, data collection should be done by an App residing on the O2 customer's smartphone.
  2. An alternative would be deep packet inspection (dpi), but this would only capture network usage, which can (and in most cases will) be very different from the time the customer actively spends using his Smartphone.
  3. Obviously, the data could also be collected by old-fashioned questionnaires being filled in. This would be notoriously unreliable and I cannot imagine this being the source.

Thus, I am making the reasonable guess that the Smartphone Data Collection is mobile App based.

“Thousand and 1 Questions”: Does the data collected represent a normal O2 Smartphone user, or a particular segment that doesn't mind having a Software Sniffer (i.e., The Sniffer) on the device reporting their behavior? Is “The Sniffer” a standard, already installed (and activated?) App on all Smartphone devices? Only on a certain segment? Or is it downloadable (i.e., which would require a certain effort from the customer)? Is the collection done for both prepaid & contract customers, and for both old and new smartphones (i.e., usage patterns depend on OS version/type, device capabilities such as air interface speed DL & UL, CPU, memory management, etc.)? … Is WiFi included or excluded? What about Apps running in the background (are these included)? etc…

I should point out that it is always much easier to poke at somebody else's data analysis than it often is to collect, analyse and present such data. Though, depending on the answers to the above “1,000 + 1” questions, the O2 data either becomes a fair representation of an O2 Smartphone customer or “just” an interesting data point for one of their segments.

If the average Smartphone cellular (i.e., no WiFi blend) monthly consumption in the UK is ca. 450MB (+/-50MB), and if the consumer had an average cellular speed of 0.5Mbps (i.e., likely conservative with the exception of streaming services, which could be lower), one would expect the Time spent consuming Network Resources to be no more than ca. 120-130 minutes per month, or roughly 4-5 minutes per day (@ R99 384kbps this would be ca. 5-6 minutes per day). If I were to choose a more sophisticated QoS distribution, the Network Consumption Time would in any case not change by an order of magnitude or more.
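A back-of-the-envelope sketch of that estimate in Python; the 450±50MB volume and the two speeds come from the text above, while the 30-day month is my assumption:

```python
def network_minutes(volume_mb, speed_mbps, days=30):
    """Minutes per month (and per day) the device needs to move the monthly volume."""
    minutes_per_month = volume_mb * 8 / speed_mbps / 60   # MB -> Mbit -> seconds -> minutes
    return minutes_per_month, minutes_per_month / days

for volume in (450, 500):                      # mid estimate and upper end of 450 +/- 50 MB
    for speed, label in ((0.5, "0.5 Mbps"), (0.384, "R99 384 kbps")):
        monthly, daily = network_minutes(volume, speed)
        print(f"{volume} MB @ {label}: ca. {monthly:.0f} min/month, ca. {daily:.1f} min/day")
# 450-500 MB at 0.5 Mbps gives ca. 120-133 min/month, i.e., roughly 4-5 minutes per day;
# at R99 384 kbps it becomes ca. 156-174 min/month, i.e., roughly 5-6 minutes per day.
```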

So we have ca. 5 minutes of Mobile Data Network Time Consumption daily versus O2's Smartphone usage time of 106 minutes (excluding Calls & SMS) … a factor of 20+ in difference!

For every minute of mobile data network consumption the customer spends 20+ minutes actively with his device (i.e., reading, writing, playing, etc.).

So …. Can we trust the O2 Smartphone data?

Trend-wise the data certainly appears reasonable! Whether the data represents a majority of the O2 smartphone users or not … I doubt it somewhat. However, without a more detailed explanation of data collection, sampling, and analysis, it is difficult to conclude how representative the O2 Smartphone data really is of their Smartphone customers.

Alas, this is the problem with most of the mobile data user and usage statistics being presented to the public as an average (i.e., I have had my share of this challenge as well).

Clearly we spend a lot more time with our device than the device spends actively on the mobile network. This trend has been known for a long time from the fixed internet. O2 points out that the Smartphone, with its mobile applications, has become the digital equivalent of a “Swiss Army Knife” and as a consequence (as Tomi also points out in his blog) is already in the process of replacing a host of legacy consumer devices, such as the watch, alarm clock, camera (both still pictures and video), books, music players, radios, and of course, last but not least, substituting The POMTS.

I have argued and shown examples of how the Average Numbers we are presented with are notorious by character. What other choices do we have? Would it be better to report the Median rather than the Average (or Mean)? The Median divides a given consumptive distribution in half (i.e., 50% of customers have a consumption below the Median and 50% above). Alternatively, we could report the Mode, which would give us the most frequent consumption across our consumer distribution.

Of course, if consumer usage were distributed normally (i.e., symmetric, bell shaped), Mean, Median and Mode would be one and the same (and we would all be happy and bored). No such luck!

Most consumptive behaviors tend to be much more skewed and asymmetric (i.e., “the few take the most”) than the normal distribution (which most of us instinctively assume when we are presented with figures). Most people are not likely to spend much thought on how a given number is calculated. However, it might be constructive to provide the percentage of customers whose usage is below the reported average. The reader should however note that if this percentage figure is different from 50%, the consumptive distribution is skewed and the onset of Reality Distortion has occurred.
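To make the point concrete, here is a minimal Python sketch with a simulated, heavily right-skewed consumption distribution (the lognormal shape and its parameters are my illustrative assumptions, not measured operator data):

```python
import random
from statistics import mean, median

random.seed(42)
# Simulated monthly data consumption (MB) for 10,000 customers, heavily right-skewed.
usage = [random.lognormvariate(3.5, 1.5) for _ in range(10_000)]

mean_mb    = mean(usage)
median_mb  = median(usage)
below_mean = sum(u < mean_mb for u in usage) / len(usage)

print(f"Mean:   {mean_mb:7.1f} MB")
print(f"Median: {median_mb:7.1f} MB")
print(f"Share of customers below the reported 'average': {below_mean:.0%}")
# For this lognormal the theoretical mode is exp(3.5 - 1.5**2) ≈ 3.5 MB - far below both.
# The mean lands around 100 MB, the median around 33 MB, and roughly 3 out of 4 customers
# sit below the "average" - a clear sign of a skewed (Reality Distorting) distribution.
```

Reporting the Median (or the share of customers below the Mean) alongside the average would immediately flag how skewed the underlying consumption really is.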