Licensed to do anything
Usage rights will be key for the future communications spectrum auctions.
If you want to transmit radio signals then, unless you are operating licence-exempt equipment, you need a licence to do so. Until now, radio licences have been more or less specific to a particular technology or application. This is fine if that happens to be the application you have in mind, but if you want to do something different, perhaps because technology has moved on, it is not so helpful. A flexible licence, allowing a range of technologies and uses, would enable rapid innovation and unlock substantial additional value from the use of radio.
Broadly, licences can control either the power transmitted or the interference caused. Controlling the transmitted power through a 'mask' that defines the maximum power levels across a range of frequencies is only a weak control on the interference caused to neighbours. If the licence holder were to change the density of deployment of their base stations, perhaps because they had changed the use to which they put the spectrum, the interference suffered by the neighbour could increase even though the licence holder remained within their mask. Significant changes in deployment, for example from a UMTS frequency division duplex (FDD) system to a WiMAX time division duplex (TDD) system, could have a very severe impact on neighbours even though it might appear that the transmitted power had decreased.
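To see why a mask is only a weak control, it helps to note what a mask check actually tests. The sketch below is a minimal, hypothetical illustration (the function name and figures are not from any Ofcom specification): compliance means staying under the mask at every frequency, and nothing in the check depends on how many base stations transmit or where they are - which is exactly the gap described above.

```python
# Illustrative sketch only: a transmit-mask ('block edge mask') check.
# The mask maps frequency (MHz) to a maximum permitted power (dBm).
# Note that deployment density never appears - the check says nothing
# about the aggregate interference neighbours actually receive.

def within_mask(tx_spectrum, mask):
    """Return True if the transmitted power stays at or below the
    mask limit at every frequency in the transmit spectrum."""
    return all(power <= mask[freq] for freq, power in tx_spectrum.items())

# Hypothetical example: in-block limits at 2110-2115 MHz, a tight
# out-of-block limit at 2120 MHz.
mask = {2110: 43.0, 2115: 43.0, 2120: -10.0}
compliant = {2110: 40.0, 2115: 38.0, 2120: -20.0}
leaky = {2110: 40.0, 2115: 38.0, 2120: 0.0}
```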
So transmit restrictions, or 'block edge masks' (BEMs) as they are coming to be known, are not the optimal way to control interference into neighbouring channels. Where such control matters, a licence that specifies the maximum level of interference that may be caused, rather than the power that may be transmitted, is a superior way of licensing because it directly controls the problem. We call this form of licensing spectrum usage rights (SURs). SURs are generally better because they place only those restrictions on licence holders that are needed to protect neighbours, and no more. At the same time they give neighbours certainty as to the maximum levels of interference they can expect, reassuring investors that the network they have paid to deploy will not suffer reduced capacity or need expensive re-engineering as a result of neighbours changing their usage.
Determining the numbers to put into the licence terms requires an estimate of the likely transmit power, the likely base station density and some other factors such as base station antenna gain and height and the use of power control mechanisms. These can then be input to a propagation modelling tool, which can be used to estimate the interference distribution and hence arrive at the SUR licence terms. Our approach is for Ofcom initially to estimate these parameters and then to consult on them.
With SURs, the actual interference caused, or some approximation to it, must be assessed in order to verify compliance.
One key decision we at Ofcom faced was whether to verify by measurement or modelling. Measurement would involve using radio receivers in a number of locations. It is accurate, if done correctly, but could be expensive and time-consuming. Modelling estimates the level of interference using propagation modelling tools. It only approximates the interference caused, but can be performed quickly and at lower cost. Also, because it accords with the way networks are planned, it makes it simpler for licence holders to verify, as they plan their network, that it will not exceed its licence terms. With measurement, by contrast, there is a risk that licence holders deploy a network based on their modelling tools only to find that in practice it exceeds their licence terms.
Clearly there are benefits from either approach - measurement is more accurate but modelling is simpler and cheaper. For the moment, after consultation, workshops and discussion, we have concluded that modelling fits better with the needs of licence holders. This then leads to a need to specify clearly how the modelling will be done so that all stakeholders end up with the same results for a given network deployment.
Defining the modelling falls into two parts - specification of the algorithms and the necessary data sets, and specification of the process and the assumptions that need to be made.
There are many propagation models available. Some are standardised, predominantly by the ITU; others are proprietary. Different models often apply for different frequency bands and sometimes for different services. Our preferred approach is to consider each band separately and select the ITU model that fits best. Because models often have options or variable parameters, we will then specify in detail exactly how the model is to be implemented so that all parties will get identical results when using it.
Models also make use of geographical databases (which record terrain height) and clutter databases (which divide areas into different usage types such as urban, forest, etc). All licence holders need to use identical databases, so we will specify the supplier and version number of these in the licence. Finally, as a check, we will model a hypothetical network ourselves and publish the results. Others can then model the same network to confirm that their implementation delivers the same results.
The next stage in the process is to define the area over which the SURs will be verified. There is a balance here. If the verification area is made very large (e.g. the whole country) then very high levels of interference could be caused in some areas and low interference levels in others. The averaging process would then allow the licence holder to be within the terms of their SUR, but this would not provide any real protection to neighbours. If the verification area is made very small so that, say, it includes only part of the coverage area of a single base station, then placing this measurement area near the centre of the cell would deliver very different results from placing it near the edge. This would lead to excessive variability in results, which is not desirable. The solution is an area somewhere between these extremes; our modelling suggests that of the order of five to ten cells is optimal.
Unfortunately, the size of cells varies, both with geography and with time as operators split cells. So it is not possible to define a verification area in terms of its dimensions. Instead, we suggest a process whereby the modelling software tracks across the whole of the licence area (generally the whole country). At each intersection point of the OS 1km grid square lines it forms a square and expands the size of this square in discrete steps until it includes at least ten base stations. It then processes the square as described below before moving on to the next grid square intersection.
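The expanding-square procedure can be sketched in a few lines. This is an illustrative implementation under assumed parameters (the step size and initial width are hypothetical, not Ofcom values); a real tool would work on the OS national grid.

```python
# Illustrative sketch of the expanding verification square: grow a
# square centred on a grid intersection in discrete steps until it
# encloses at least ten base stations. Coordinates are in metres;
# the initial half-width and step size are assumed figures.

def verification_square(centre, base_stations, min_stations=10,
                        initial_half_width=500, step=500):
    """Return (half_width, enclosed_stations) for the smallest tried
    square about `centre` containing at least `min_stations`."""
    cx, cy = centre
    half = initial_half_width
    while True:
        inside = [(x, y) for (x, y) in base_stations
                  if abs(x - cx) <= half and abs(y - cy) <= half]
        if len(inside) >= min_stations:
            return half, inside
        half += step  # expand the square by one discrete step

# Hypothetical deployment: a regular 20 x 20 grid of base stations
# spaced 300 m apart.
stations = [(i * 300, j * 300) for i in range(20) for j in range(20)]
half, enclosed = verification_square((3000, 3000), stations)
```

With the assumed figures, the first square (half-width 500 m) captures only nine stations, so the procedure expands once more before processing the square.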
The processing of each measurement square varies depending on whether the spectrum is being used for downlink (base station transmit) or uplink (mobile transmit).
If it is a downlink then the first step is to input into the model the location of the base stations, their height, transmit power and antenna patterns. Then the modelling tool needs to assess each 'pixel' within the measurement square. A pixel might be, for example, a 50m x 50m square. For each pixel, the tool sums the signal strength from each base station to form the total predicted power flux density (PFD) for the pixel. Once this has been done for all pixels it can be determined whether the licence terms have been exceeded - this occurs if more than a given percentage of the pixels (e.g. 10 per cent) have a modelled PFD above the agreed limit.
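The downlink check above can be sketched as follows. For simplicity this sketch substitutes free-space path loss for the ITU model the licence would actually specify, and uses received power in dBm as a stand-in for PFD; all names and figures are illustrative assumptions, not Ofcom parameters.

```python
# Hedged sketch of the downlink compliance check: sum the power at
# each pixel from every base station (in linear units), then test
# whether more than 10 per cent of pixels exceed the agreed limit.
import math

def fspl_db(distance_m, freq_mhz):
    """Free-space path loss in dB - an illustrative stand-in for the
    propagation model specified in the licence."""
    return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(distance_m / 1000)

def pixel_total_dbm(pixel, base_stations, freq_mhz):
    """Total predicted power at a pixel, summing contributions from
    all base stations in linear milliwatts before converting to dBm."""
    px, py = pixel
    total_mw = 0.0
    for (bx, by, eirp_dbm) in base_stations:
        d = max(math.hypot(px - bx, py - by), 1.0)  # avoid log(0)
        rx_dbm = eirp_dbm - fspl_db(d, freq_mhz)
        total_mw += 10 ** (rx_dbm / 10)
    return 10 * math.log10(total_mw)

def exceeds_licence(pixels, base_stations, freq_mhz, limit_dbm,
                    max_fraction=0.10):
    """Licence terms are exceeded if more than `max_fraction` of the
    pixels have a modelled level above the agreed limit."""
    over = sum(1 for p in pixels
               if pixel_total_dbm(p, base_stations, freq_mhz) > limit_dbm)
    return over / len(pixels) > max_fraction
```

A real tool would repeat this for every verification square across the licence area, using the specified terrain and clutter databases rather than a simple distance-based model.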
If it is an uplink then more assumptions need to be made, since we cannot know where all the mobiles are and whether they are transmitting at any given time. What we do know is the number of base stations operating in receive mode on that frequency, and we have some understanding of the uplink capacity of each base station. Knowing this sets a limit on the number of mobiles that can transmit simultaneously for any given data rate. We then select the data rate for a likely representative service (e.g. voice for cellular) and derive an assumed maximum number of mobiles that could simultaneously access a cell, assuming a fully-loaded network in the vicinity. Each mobile is then assigned an assumed power level, and the mobiles are distributed evenly around the cell. The interference from all the mobiles can then be derived, either at ground level if the neighbour is using their spectrum as a downlink, or at base station level if the neighbour is also deploying an uplink.
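The uplink assumptions reduce to two steps: derive the mobile count from cell capacity and a representative service rate, then place that many assumed mobiles evenly around the cell. The sketch below is purely illustrative - the capacity figures, the ring placement at half the cell radius, and all names are assumptions, not part of any Ofcom specification.

```python
# Illustrative sketch of the uplink assumptions: a fully-loaded cell
# supports (uplink capacity / per-user rate) simultaneous mobiles,
# which are then distributed evenly - here, on a ring at half the
# cell radius. All figures are hypothetical.
import math

def max_simultaneous_mobiles(uplink_capacity_kbps, service_rate_kbps):
    """Whole number of mobiles a fully-loaded cell can carry at the
    chosen representative service rate."""
    return uplink_capacity_kbps // service_rate_kbps

def place_mobiles_evenly(cell_centre, cell_radius_m, n_mobiles):
    """Distribute the assumed mobiles at equal angles on a ring at
    half the cell radius (one simple 'evenly around the cell' rule)."""
    cx, cy = cell_centre
    r = cell_radius_m / 2
    return [(cx + r * math.cos(2 * math.pi * k / n_mobiles),
             cy + r * math.sin(2 * math.pi * k / n_mobiles))
            for k in range(n_mobiles)]
```

Each placed mobile would then be assigned the assumed transmit power, and the aggregate interference computed at ground level or at neighbouring base stations, as described above.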
All these procedures can be readily automated using modelling tools and macros.
Changing an SUR
If a spectrum user wishes to change the technology that they are going to deploy, then the in-band, out-of-band or geographical limits might become inappropriate for two neighbouring spectrum users. The solution is for these neighbours to mutually agree changed limits and then approach Ofcom to request a change to their licences.
We believe that SURs offer a number of advantages over other licensing methods in that they directly control the interference caused, rather than indirectly as is currently achieved. This provides greater certainty to investors that the networks they deploy will not suffer interference problems in the future while still providing the flexibility to change usage. However, as a result, they are somewhat more complex to set and verify than the current mask-based approach.
We have shown how we might initially set the SUR terms, how we would verify that these were being met in practice, and how they might be modified by licence holders if needed.
Our proposal at Ofcom is now to offer SURs as an option in forthcoming auctions, and to allow licence holders with conventional licences to change the terms of their licence to an SUR if they wish.
Professor William Webb is head of R&D at Ofcom.