Cables

Pushing cable capacity to the max

How far can communications cable wire-speeds be pushed? We look at how the theoretical limits of the past are being exceeded thanks to new bit-rate acceleration techniques.

The increases in network bandwidth over 25 years of public data communications have been every bit as extraordinary as those in storage density or processing power, as enshrined in Moore's Law. Indeed, according to that law, which predicts a doubling in performance every two years, maximum data speeds would have increased some 4,000-fold between the time of the first generally available online services in 1987 and 2011.

The fastest dial-up modems in 1987 ran at 9.6kbps, while now there are examples of services providing broadband access over the same copper infrastructure at 50Mbps – about a 5,000-fold increase. The average speed increase is greater still, from 2.4kbps in 1987 to 9.4Mbps in western Europe by the end of 2009, according to technology analyst IDC, with Sweden doing best at 16.7Mbps.
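To make those growth figures concrete, a few lines of Python reproduce the arithmetic; the 1987 and 2011 endpoints and the speed figures are those quoted above.

```python
# Back-of-envelope check of the growth figures quoted in the text.

years = 2011 - 1987                      # span of public data communications

moore_factor = 2 ** (years / 2)          # doubling every two years
print(f"Moore's-law factor: {moore_factor:,.0f}x")               # ~4,096x

dialup_1987 = 9.6e3                      # 9.6kbps dial-up modem
copper_2011 = 50e6                       # 50Mbps broadband over the same copper
print(f"Top-speed increase: {copper_2011 / dialup_1987:,.0f}x")  # ~5,208x

avg_1987, avg_2009 = 2.4e3, 9.4e6        # average speeds from the text
print(f"Average-speed increase: {avg_2009 / avg_1987:,.0f}x")    # ~3,917x
```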

However, increases in speed do not necessarily indicate a new baseline from which additional boosts will proceed. The bigger question in the research and development context is whether this rate of increase will be sustained. IDC thinks not for the moment, predicting just a threefold increase in average speed to 27Mbps by 2014 in western Europe.

At the same time, several caveats should be made. First, these are nominal average speeds, not reflecting the performance actually delivered to the consumer, which is often considerably slower, especially over copper cables. Second, increases in bandwidth occur in spurts. For example, the arrival of DSL (digital subscriber line) technologies over a decade ago provided a huge boost, which was followed by a slowdown while the technology was built out and demand accumulated.

We could be poised for another growth spurt, at least according to Alcatel-Lucent, whose Bell Laboratories is working on some of the technologies that would underpin a significant surge in bandwidth. This appears to contradict IDC's relatively modest expectations, but the difference may be just a matter of timing. To unravel this dichotomy, we have to roll back three years to a time when the data communications world caught a severe bout of fibre fever.

Getting enough fibre

Fibre has inherently greater data capacity than copper, and will always deliver much higher bit-rates, largely because the signals attenuate more slowly and are not subject to electromagnetic interference. Therefore it was thought that the only long-term solution for super broadband was to take fibre right up to every home.

The Fibre To The Home (FTTH) Council, in its heyday in 2008, met every year to plot this bandwidth utopia, but reckoned without the cost and logistical problems posed not so much by the last mile as by the final hundred-metre dash. However, the FTTH Council did establish some worthy goals for the industry and the market to aim at, although some unforeseen factors played a part in shifting the goalposts.

The economic downturn has slowed the rate of fibre deployment, but the main adverse factor has been that the task was underestimated: not just the physical work, but also the acquisition of planning permission and decisions over who would foot the bill. It is this factor that lies behind IDC's rather cautious forecast.

'The time and cost involved to run fibre all the way to the home mean copper will remain a key infrastructure in western Europe for the foreseeable future,' says Jan Hein Bakkers, Research Manager at IDC. 'A recent analysis I did shows that, by 2014, 63 per cent of homes in western Europe will still use a copper connection for either voice, broadband or TV, compared with 75 per cent in 2010.'

Speed-enhancing algorithms

This means that, for the immediate future, speed increases will be highly dependent on technology advances in copper transmission, albeit combined with continued build out of fibre to bring it closer to the customer premises equipment (CPE). This is where recent advances have given cause for optimism, with the real prospect that the combination of new algorithms and shorter distances will deliver gigabit access speeds over copper within a few years, just as has been possible for a decade over short runs within Ethernet LANs via the IEEE 802.3ab standard.

Indeed, more recently 10Gbps over copper has been achieved using electrically shielded systems at up to 100m, depending on the grade of cabling. Over the access network, longer distances and lower-grade cabling make it much harder to deliver high speeds over copper than within the LAN.

The main limitations on copper transmission are signal attenuation and crosstalk, which is essentially induced electrical interference between neighbouring copper wires. High data-rates require higher signal frequencies to push information more quickly into the wire, and these in turn amplify the effects of both attenuation and crosstalk. Twisting the wires reduces crosstalk in the first place, and increasingly sophisticated techniques for eliminating the remaining crosstalk have recently been developed.
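The frequency dependence can be sketched with textbook scaling rules: skin effect makes attenuation grow roughly with the square root of frequency, while far-end crosstalk (FEXT) coupling worsens roughly 20dB per decade. The Python sketch below uses invented constants for illustration, not measured cable data.

```python
import math

def attenuation_db(freq_hz, length_m, k=0.004):
    """Rough skin-effect loss in dB: k * sqrt(f in MHz) per metre (illustrative)."""
    return k * math.sqrt(freq_hz / 1e6) * length_m

def fext_coupling_db(freq_hz, ref_hz=1e6, base_db=-65.0):
    """Rough FEXT coupling in dB relative to the direct signal (illustrative)."""
    return base_db + 20 * math.log10(freq_hz / ref_hz)

# Pushing the signal to higher frequencies worsens both effects at once:
for f_mhz in (1, 10, 30):
    f = f_mhz * 1e6
    print(f"{f_mhz:>2} MHz: loss over 800m = {attenuation_db(f, 800):5.1f} dB, "
          f"FEXT = {fext_coupling_db(f):6.1f} dB")
```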

The first major advance in DSL transmission came with VDSL and its successor VDSL2, which involved shortening the length of the copper loops to 1km or less, increasing speeds generally into the range 20Mbps to 30Mbps over that distance. 'To achieve these shorter loop lengths, the operators will move the DSLAMs (digital subscriber line access multiplexers) – at the boundary between the copper access and fibre backhaul network – into street cabinets within a radius of 400m to 800m of the homes, giving everyone a maximum loop length of 800m,' says Stefaan Vanhastel, director of product marketing for wireline networks at Alcatel-Lucent.

The next step for many telecommunications operators is to then apply bonding – or inverse multiplexing – to combine two copper pairs into a single connection and so double the bit-rate, achieving speeds of 40Mbps to 60Mbps; but this technique assumes that two pairs are available. 'Quite a lot of countries do have two pairs available as homes used to have two telephone lines,' Vanhastel notes.
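The core idea of bonding can be shown in a few lines of Python: stripe one data stream across the two pairs and reassemble it in order at the far end. The fragment size here is hypothetical, and real bonding schemes (such as the ITU-T G.998 family) add sequence numbering, differing per-pair rates and error handling; this is only a sketch of the principle.

```python
CHUNK = 64  # hypothetical fragment size in bytes

def split_across_pairs(data: bytes):
    """Stripe a stream across two pairs, alternating fragments."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return chunks[0::2], chunks[1::2]   # pair A, pair B

def reassemble(pair_a, pair_b) -> bytes:
    """Interleave the fragments back into the original order."""
    out = []
    for a_frag, b_frag in zip(pair_a, pair_b):
        out.append(a_frag)
        out.append(b_frag)
    out.extend(pair_a[len(pair_b):])    # pair A may carry one extra fragment
    return b"".join(out)

payload = bytes(range(256)) * 4
a, b = split_across_pairs(payload)
assert reassemble(a, b) == payload      # both pairs together carry the stream
```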

As it happens, the first VDSL2 bonding deployment in 2010, by AT&T in the US, was used to extend the existing 20Mbps to 30Mbps VDSL2 bit-rate to subscribers further from the DSLAM, rather than to double bit-rates, highlighting the role new technologies can also play in bringing more people up to speed.

Vectoring

Following VDSL bonding comes vectoring, which cancels crosstalk interference almost completely. Copper pairs are usually housed in binders containing perhaps 25 pairs, creating the scope for crosstalk. Vectoring works by measuring the input signals to each pair, using these to calculate the interference generated, and in turn creating a cancellation signal that wipes out the crosstalk.

This leaves a clean signal that allows the power to be reduced, rather as wearing headphones in a noisy environment allows the volume to be turned down. 'Just like bonding, vectoring can double the bit rate but with the advantage that you don't need a second pair,' says Alcatel-Lucent's Vanhastel.
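In signal-processing terms, downstream vectoring amounts to precoding across the pairs in a binder. The toy sketch below uses an invented four-pair crosstalk matrix (real systems measure the coupling per DMT tone) to show the principle: pre-distort the transmitted signals so the crosstalk cancels itself at each receiver.

```python
import numpy as np

# Toy model of downstream vectoring over a 4-pair binder.
# H is the channel matrix: diagonal = direct path of each pair,
# off-diagonal = crosstalk coupling (values invented for illustration).
rng = np.random.default_rng(0)
n = 4
H = np.eye(n) + 0.08 * rng.standard_normal((n, n)) * (1 - np.eye(n))

x = rng.standard_normal(n)              # symbols intended for each line

# Without vectoring: each receiver sees its signal plus crosstalk.
y_plain = H @ x

# Zero-forcing precoder: transmit P @ x with P = H^-1 @ diag(H), so that
# H @ P @ x = diag(H) @ x -- each line receives only its own symbol.
P = np.linalg.inv(H) @ np.diag(np.diag(H))
y_vectored = H @ (P @ x)

print("crosstalk error, plain   :", np.linalg.norm(y_plain - x))
print("crosstalk error, vectored:", np.linalg.norm(y_vectored - x))  # ~0
```

Note that the precoder is an n-by-n matrix (per tone, in a real system), so the computation grows roughly with the square of the number of lines – which is the scaling problem Peeters describes below.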

The disadvantage here is that the calculations are so complex that they can only be performed across a limited number of pairs, which rules it out for large central offices (formerly 'exchanges'), according to Vanhastel's colleague Michael Peeters, CTO of Alcatel-Lucent's wireline division. 'You will only see vectoring on relatively small nodes with a maximum of about 200 lines, which is perfectly suited to a street cabinet deployment where you have 48, 96, or 192 lines,' Peeters says.

A number of operators are evaluating vectoring, including Turk Telekom, which is also interested in Alcatel-Lucent's next step forward, called DSL Phantom Mode. This is based on a very old discovery applied in the DSL context, exploiting the differential transmission technique underlying twisted pairs: a positive voltage is applied to one wire and a negative voltage to the other, and the receiver extracts the digital bit value by detecting the difference between the two – positive for a nought, say, and negative for a one.

In Phantom Mode two pairs are again needed, each transmitting its own signal as normal; but now a third voltage is superimposed, operating in one direction across both wires of one pair and in the opposite direction across both wires of the other.

In this way each whole pair operates like a single wire, enabling a third digital bit to be sent at the same time. Because this extra signal is carried identically on both wires of a pair, it does not affect the voltage difference measured at the receiving end of that pair; but it does create a measurable difference between the common voltages of the two pairs.
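A toy numeric sketch makes the principle concrete, assuming ideal wires: each pair carries its own differential signal, while the phantom signal rides on the common modes of the two pairs and is recovered as the difference between them. The voltage values are invented for illustration.

```python
# Toy model of DSL Phantom Mode on two ideal pairs (four wires).
# d1, d2: the normal differential signals on pairs 1 and 2.
# p:      the extra 'phantom' signal, driven as a common-mode voltage
#         of +p/2 on pair 1 and -p/2 on pair 2.
d1, d2, p = 0.7, -0.3, 0.5   # illustrative voltage values

# Wire voltages: each pair's two wires sit at +/- half its differential
# signal, offset by the pair's share of the phantom voltage.
pair1 = (+d1 / 2 + p / 2, -d1 / 2 + p / 2)
pair2 = (+d2 / 2 - p / 2, -d2 / 2 - p / 2)

# Each pair's own signal is the difference across its wires; the phantom
# offset is common to both wires, so it cancels out:
r1 = pair1[0] - pair1[1]                  # == d1
r2 = pair2[0] - pair2[1]                  # == d2

# The phantom channel is the difference between the two pairs'
# common-mode (average) voltages:
rp = (sum(pair1) / 2) - (sum(pair2) / 2)  # == p

print(r1, r2, rp)   # -> 0.7 -0.3 0.5 (up to float rounding)
```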

There is a problem, however, in that Phantom Mode creates additional interference, but Alcatel-Lucent has tackled this by applying its vectoring to cancel all the crosstalk. Bonding is then applied to unite all three channels – the two physical pairs plus the third 'phantom' pair – into a single circuit. Phantom Mode therefore combines three techniques: vectoring and bonding as well as the phantom effect itself. It has been demonstrated over loop lengths of 400m, at which distance vectoring allows 100Mbps per pair; Phantom Mode then multiplies that by three across the two pairs – two physical channels plus the phantom – to reach 300Mbps.

With crosstalk virtually eliminated through vectoring, signal attenuation remains the main barrier to further improvements in copper performance. The remedy then is to shorten the loop length further to less than 300m, but in order to extract higher bit-rates it is necessary to extend operating frequencies beyond those defined in current physical layer standards.

'When the current DSL protocol was defined in 2003, certain choices were made with respect to coding,' says Peeters. 'Since then coding has advanced and you could increase capacity but a new physical layer is needed and that is being worked on. We are now specifically targeting 1Gbps over sufficiently short loop-lengths.'
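The reasoning behind that 1Gbps target can be sketched with Shannon's capacity formula, C = B·log2(1 + SNR): a shorter loop preserves signal-to-noise ratio at high frequencies, so a new physical layer can exploit a much wider band. The bandwidth and SNR figures below are illustrative assumptions, not numbers from any standard or from Alcatel-Lucent.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR) for an ideal channel."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative scenarios (assumed figures, not standardised profiles):
scenarios = [
    ("longer loop, ~17MHz band", 17e6,  25),   # VDSL2-like spectrum
    ("short loop, ~100MHz band", 100e6, 30),   # wider usable spectrum
]
for name, bw, snr_db in scenarios:
    c = shannon_capacity_bps(bw, snr_db)
    print(f"{name}: ~{c / 1e6:,.0f} Mbps upper bound")
    # prints roughly 141 Mbps and 997 Mbps -- the second scenario shows
    # how a short loop plus wider band approaches the 1Gbps target.
```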

Wirespeed overkill?

Of course 1Gbps is far more than most users need, but there is reason to assume that applications will emerge to gobble it up, perhaps in 3D gaming. In that case capacity has to increase in the backhaul and core networks as well, or else the bottleneck will merely be transferred there from the access layer. This puts the spotlight on fibre, which will have to expand in capacity as well. That is happening at different levels, ranging from core or trunk transport, to the access node where it meets copper.

At the trunk level over long distances, the high cost of laying fibre combined with the greater capacities required makes it worth having the best system: narrow single-mode fibre only 8µm to 10µm in core diameter. Being so thin, single-mode fibre requires high-powered lasers to inject the light, but it can then transmit coherent light at a single wavelength, which attenuates more slowly and can therefore sustain higher data-rates over longer distances. Furthermore, a single fibre can carry multiple channels, each at a different wavelength separated by a clear margin to avoid dispersion, using dense wavelength division multiplexing (DWDM).

Fibre transmission record breaker

In 2010 NTT of Japan broke the speed record for a single-mode fibre by transmitting signals over 432 separate wavelengths under DWDM, at a rate of 171Gbps per wavelength, making a total of 69.1Tbps. Then in December 2010 Alcatel-Lucent raised the bar further by announcing transmission at 700Gbps over single-mode fibre using one optical channel at a single wavelength. With DWDM this brings the potential data rate up to about 240Tbps over a single fibre – a throughput potential that provides plenty of headroom for expansion of trunk networks.
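The aggregate figures are simply channel count multiplied by per-wavelength rate. A quick check of the quoted NTT numbers follows; the observation about coding overhead is our inference from the arithmetic, not a detail from the announcement.

```python
# Quick check on the DWDM arithmetic quoted above.
wavelengths = 432
line_rate_gbps = 171                        # per-wavelength line rate
print(wavelengths * line_rate_gbps / 1000)  # ~73.9 Tbps raw

# The announced 69.1 Tbps total implies ~160 Gbps of net payload per
# wavelength, the gap presumably being coding overhead within the
# 171 Gbps line rate -- an inference from the numbers, not a quoted fact.
print(69.1e3 / wavelengths)                 # ~160 Gbps net per wavelength
```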

When it comes to the access network, and the point where the fibre connects with the final copper loop, we are talking about much lower speeds using thicker multimode fibre with a 62.5µm core. Its wider core is much easier to couple light into, allowing the use of lower-cost photonics systems with PON (passive optical network) technology. Speeds of around 2.5Gbps per fibre have been achieved so far, while recently US carrier Verizon demonstrated PON at 10Gbps.

This increases the potential access speed further and will ensure fibre stays ahead of copper, but one important point remains: even when FTTH is deployed, the last few metres to the customer premises equipment such as TVs or computers will still be over copper, and there may also be a wireless link. This is because fibre is difficult to splice, and very few consumer electronics devices take it as a direct input.

Therefore wireline speeds will continue to be constrained both by fibre in the core and by copper over ever-shorter loop lengths, until the latter come down to the length of the in-home cabling over the last few metres. As fibre speeds increase, the diminishing length of the copper loops will allow copper too to support higher bit-rates. Within the foreseeable future, it is possible that as fibre reaches the end of the street or block, to within 50m or 100m of the CPE, access speeds over copper will reach 10Gbps.
