Fatima Gunning, Andrew Ellis and Mary McCarthy of the Tyndall team

The big squeeze

There’s a limit to how much data optical fibres can carry, and it’s looming. E&T looks at what can be done about it.

Why do both 2Mbit/s and 20Mbit/s broadband connections stutter rather than stream when delivering Web TV? Because network operators specify the connections from the local exchanges (where the network reaches each home) back to their own networks, known as the backhaul, to support a sustained bandwidth of just 100kbit/s per customer.

Networks are run like this because it is cheaper to install traffic management equipment than to add real capacity. With traffic growth estimated at between 45 and 65 per cent per year and customer complaints rising, it’s a strategy of diminishing returns. Andrew Lord, who models network architectures for BT, claims backhaul capacity should really be 30 to 40 per cent of the access bandwidth in this iPlayer and YouTube age. For a company offering 24Mbit/s broadband, as UK ISP Be does, that’s a rather daunting 7 to 9Mbit/s each.
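Lord’s figure is straightforward arithmetic; a quick sketch (the function name is ours, purely for illustration):

```python
def backhaul_per_customer_mbps(access_mbps, percent):
    """Backhaul needed per customer, as a percentage of the access rate."""
    return access_mbps * percent / 100

# 30-40 per cent of a 24Mbit/s access line:
print(backhaul_per_customer_mbps(24, 30))  # 7.2
print(backhaul_per_customer_mbps(24, 40))  # 9.6
```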

Digging ditches to bury more optical fibre is expensive, so operators are looking at ways to squeeze extra capacity from the fibres they already have in the ground by borrowing from the modulation schemes used in wireless. These schemes can encode relatively high numbers of bits per second per Hz of available spectrum, using phase states or phase states combined with polarisation states to carry the information.

“It’s the optical equivalent of when dual- and quad-core architectures started to make sense for boosting CPU throughput, rather than just increasing clock speed,” says Robert Griffin, a systems architect at optical component maker Oclaro. “DQPSK modulation, for example, is like dual core in the microprocessor world.”

Respect the law

Claude Shannon of Shannon’s Law fame showed that the information capacity of any communications channel is limited by the channel’s bandwidth and its signal-to-noise ratio (SNR). But as long as the information is transmitted at a rate lower than the capacity of the noisy channel, error correction codes can be used to cut the likelihood of an error at the receiver to near zero. Communication capacity has grown exponentially over the last 30 years, with the overall capacity of the network closely tracking user demand. But reported capacities in experimental systems are approaching the fundamental limits imposed by SNR and the non-linearity of conventional optical fibres.
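Shannon’s result is easy to try out directly; a minimal sketch, assuming an illustrative 50GHz channel slot and a hypothetical 20dB SNR:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon limit: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical numbers: a 50GHz channel slot with a 20dB (100x) SNR.
snr = 10 ** (20 / 10)
print(f"{shannon_capacity_bps(50e9, snr) / 1e9:.0f} Gbit/s")  # ~333 Gbit/s
```

Real fibre links fall short of this figure because non-linearity caps the usable SNR, which is precisely the limit the Tyndall paper explores.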

In the short term, operators can apply various techniques to increase capacity, including: increasing the total bandwidth of each channel using modulation schemes based on phase-coherent optical multiplexing; optimising SNR through careful link design; using distributed and phase-sensitive amplifiers and optimising the amplifier spacing; and compensating for intra-channel non-linearity either through link design or signal processing at the terminals. But, according to a recent paper by Andrew Ellis and his colleagues Jian Zhao and David Cotter from the Tyndall research centre in Cork, Ireland, each of these can only increase capacity by a factor of two.

Transmission physics

Today’s networks use a lot of 10Gbit/s links - the 10Gbit/s being the capacity per wavelength of a wavelength-division multiplexing (WDM) system, which might send 100 or so different wavelengths (spaced at 50 or 100GHz) down one fibre at once. The data is modulated onto each wavelength using the laser equivalent of a boy scout turning his torch on and off. But the physics of optical transmission means that you can’t shunt 40Gbit/s or more down fibres using these simple amplitude-modulated, on-off signals any more.

Using current modulation schemes, 10Gbit/s on-off modulated signals can be fitted into 0.12nm of the optical spectrum. 40Gbit/s signals need four times the spectrum (0.48nm), which makes them unsuitable for use in WDM systems working with 50GHz (0.4nm) channel spacing: it’s like trying to make a comb whose teeth are wider than the distance between one tooth and the next. Chromatic dispersion, which smears the sharp edges of the pulses of light as they propagate along a fibre, also makes life difficult. If you keep to a simple on-off modulation scheme, the dispersion tolerance reduces with the square of the bit rate, requiring additional dispersion compensation in the links at high rates.
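The two scaling rules above (spectral width growing linearly with bit rate, dispersion tolerance shrinking with its square) can be sketched as follows; the helper names are ours:

```python
def scaled_spectrum_nm(base_rate_gbps, base_width_nm, new_rate_gbps):
    """On-off keyed spectral width scales linearly with bit rate."""
    return base_width_nm * new_rate_gbps / base_rate_gbps

def scaled_dispersion_tolerance(base_tol, base_rate_gbps, new_rate_gbps):
    """Dispersion tolerance falls with the square of the bit rate."""
    return base_tol * (base_rate_gbps / new_rate_gbps) ** 2

# 10Gbit/s fits in 0.12nm; 40Gbit/s needs 4x that, wider than 0.4nm spacing.
print(scaled_spectrum_nm(10, 0.12, 40))
# Going 10G -> 40G cuts the dispersion tolerance to 1/16 of its old value.
print(scaled_dispersion_tolerance(1.0, 10, 40))
```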

More complex modulation schemes keep the optical spectrum for each signal narrow by, in effect, splitting the signal into parallel paths. For example, phase shift keying (PSK) - the most popular option for future optical systems - allows multiple bits of information to be sent on a carrier at the same time and at the same frequency, using different phases. Each phase state is a symbol that can convey one or more bits of data at a time. Quadrature phase shift keying (QPSK), for example, allocates two bits of data to each of four modulated phases (0, 90, 180 and 270 degrees). In effect, a 40Gbit/s QPSK link uses two signals operating at 20 Gbaud, where the baud is a measure of symbols per second.
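A minimal sketch of the QPSK idea, assuming an illustrative bit-to-phase mapping (real systems typically use Gray coding, and the exact pairing varies):

```python
import cmath
import math

# Illustrative bit-pair to phase mapping (degrees); the pairing is ours.
QPSK_PHASES = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): 270}

def qpsk_modulate(bits):
    """Map each pair of bits to one unit-amplitude symbol on the carrier."""
    symbols = []
    for i in range(0, len(bits), 2):
        phase = math.radians(QPSK_PHASES[(bits[i], bits[i + 1])])
        symbols.append(cmath.exp(1j * phase))  # point on the unit circle
    return symbols

# Eight bits become four symbols: the symbol (baud) rate is half the bit rate.
print(len(qpsk_modulate([0, 0, 0, 1, 1, 1, 1, 0])))  # 4
```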

Another option is quadrature amplitude modulation (QAM), which increases the number of bits per symbol by using both phase and amplitude to encode them. In the simplest QAM signal, there are two carriers, differing in phase by 90 degrees. The two modulated carriers are combined at the source for transmission.

In both schemes, increasing the number of legitimate signal states increases the number of bits that can be carried per symbol. At the destination, the carriers are separated, the data is extracted from each, and then the data is combined to reconstruct the original modulating information.
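The general rule is that M legitimate states give log2(M) bits per symbol; a one-liner makes the point (the function name is ours):

```python
import math

def bits_per_symbol(num_states):
    """Each legitimate constellation state carries log2(M) bits."""
    return math.log2(num_states)

for m in (4, 16, 64):
    print(m, bits_per_symbol(m))  # 4 states -> 2.0 bits, 16 -> 4.0, 64 -> 6.0
```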

All these schemes face a similar challenge: creating signals robust enough to pass through many of the reconfigurable optical add-drop multiplexers (ROADMs) that operators use to configure their networks, adding particular wavelengths to, or dropping them from, their transport fibres. “With more capacity, we need more flexibility,” says BT’s Andrew Lord. “If you have hundreds of thousands of wavelengths, you will want to commission them quickly without too many people getting involved.”

ROADMs put more demands on the spectrum of the signal because each of them can act as a tight filtering stage that can interfere with the increasingly delicate optical signal.

And the winner is...

The Institute of Electrical and Electronics Engineers (IEEE), the International Telecommunication Union (ITU), and the Optical Internetworking Forum (OIF) are trying to define standards for high data-rate optical links. Several schemes have emerged for 40Gbit/s links, including differential phase-shift keying (DPSK), differential QPSK (DQPSK) and dual-polarisation QPSK. The last of these (DP-QPSK) is the preferred option for 100Gbit/s links. The scheme combines polarisation control with phase modulation so that four signals are used in parallel. In effect, two separate QPSK signals are combined at right angles, giving four bits per symbol and a 25Gbaud symbol rate at 100Gbit/s, compared with the 20Gbaud needed for a 40Gbit/s DQPSK signal.
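The baud-rate arithmetic is easy to check (hypothetical helper):

```python
def symbol_rate_gbaud(line_rate_gbps, bits_per_symbol):
    """Symbol rate is the line rate divided by the bits carried per symbol."""
    return line_rate_gbps / bits_per_symbol

# DQPSK carries 2 bits/symbol; DP-QPSK carries 2 bits on each of
# 2 polarisations, i.e. 4 bits/symbol.
print(symbol_rate_gbaud(40, 2))   # 20.0 Gbaud for 40Gbit/s DQPSK
print(symbol_rate_gbaud(100, 4))  # 25.0 Gbaud for 100Gbit/s DP-QPSK
```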

Last year, Bell Labs demonstrated the effectiveness of a variation of the OIF’s 100Gbit/s scheme when it sent 155 simultaneous 100Gbit/s signals 7,000km over a single optical fibre, using polarisation-division-multiplexed QPSK (PDM-QPSK). Using a favoured measure of the optical communications industry that combines data rates and the link distances over which they were achieved, that works out at 111.6 Petabit/s*km. This beats the previous record of 84.3 Petabit/s*km achieved by NTT using 135 signals at 111 Gbit/s/channel.

There is, of course, no such thing as a free lunch. In this case, the gain in spectral efficiency demands more complex optics. Modulators have to have multiple stages and, to keep things small and manageable, the components ideally need to be integrated.

Oclaro, for example, is showing customers samples of a very compact 40Gbit/s DQPSK transmitter, based on a modulator built with indium phosphide (InP) in its UK fab. The modulator has three stages: a pulse carver section, which carves a pulse train into the continuous-wave laser carrier signal at the clock frequency (20GHz); and two parallel Mach-Zehnder modulators, which apply data to that pulse train.

The modulator is only 9mm long and the assembled transmitter unit, which contains a tuneable laser in the same package as a modulator chip, is just 20 per cent bigger than the laser would be on its own. “The building blocks of DQPSK are the same as those required for coherent approaches, so there’s scope to further extend the technology to 100Gbit/s,” explains Oclaro’s Robert Griffin.

Receivers are also more complex with these advanced modulation schemes. For 100Gbit/s systems, coherent receivers will be needed; these operate by mixing a local oscillator with the incoming signal, providing in-phase (I) and quadrature (Q) outputs for each of two polarisation states. If the local oscillator is tuned to the frequency of the incoming signal, only the information from that signal is extracted, and neighbouring channels are ignored.
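The mixing step can be illustrated with a toy numerical sketch; the carrier and sample rates below are invented for the example, and a real optical front end is far more involved:

```python
import cmath
import math

def coherent_detect(samples, lo_freq_hz, sample_rate_hz):
    """Multiply the received field by the local oscillator's complex
    conjugate, shifting the selected channel down to baseband where its
    I (real) and Q (imaginary) components can be read off directly."""
    out = []
    for n, s in enumerate(samples):
        lo = cmath.exp(2j * math.pi * lo_freq_hz * n / sample_rate_hz)
        out.append(s * lo.conjugate())
    return out

# Toy numbers: a 2GHz carrier sampled at 10GS/s, LO tuned to that frequency.
fs, fc = 10e9, 2e9
rx = [cmath.exp(2j * math.pi * fc * n / fs) for n in range(8)]
print(all(abs(b - 1) < 1e-9 for b in coherent_detect(rx, fc, fs)))  # True
```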

Sophisticated digital signal processing can then extract the transmitted data from the complex received signal, as well as unwinding the effects of chromatic dispersion. 

Alternative approaches

While the trend seems to be towards more complex modulation schemes, some alternative approaches are being proposed as interim stages.

A team at Cambridge University, for example, is proposing a method based on orthogonal amplitude-shift-keyed multiplexing that uses a mixture of NRZ and Manchester coding (which have slightly different spectra) to increase link bandwidth over single-mode fibre to 20Gbit/s over 20km, or 40Gbit/s over 10km.

Most of the techniques discussed so far are designed to work over longer distances, but the Cambridge group says that its scheme would be perfectly suitable for a small country like the UK where the link distances are often quite short.

One of the most promising other ideas comes from the Photonic Systems Group of the Tyndall National Institute in Cork, Ireland, which has developed ‘coherent wavelength division multiplexing’ (CoWDM). This increases spectral efficiency by allowing the channels of a WDM optical transmission system to be packed more closely together. In contrast with other approaches, CoWDM greatly improves the amount of information that can be transmitted for a given optical bandwidth (its spectral density) without the need for complex modulation formats. In the lab, the group has transmitted 1.5Tbit/s over 80km of standard single-mode fibre (SSMF) with an overall spectral density approaching 1bit/s/Hz. It has also dramatically increased the capacity per wavelength, as shown by Tyndall’s demonstration, with Orange Labs in 2007, of 280Gbit/s transmission over 1,200km of SSMF - a record data rate per wavelength at this distance. Last year’s NTT record experiment used the same technique proposed by Tyndall, but without the phase control.

The Tyndall group is unique in pushing towards using phase coherence as the first step to achieving the best performance for both lower and higher-complexity modulation schemes, while changing only the terminals of a given transmission link.

WDM usually uses one laser per wavelength channel (or sub-carrier). The Tyndall technique turns a single laser into an optical comb generator, where one laser generates several identical coherent wavelength channels (they’ve generated up to 11 in their experiments). By offsetting the phase of adjacent channels by 90 degrees, they’ve found that it is possible to have narrower spacing between the wavelength channels than the industry norm of 50 or 100GHz, while maintaining good performance.

In standard WDM systems, the frequency and phase of a laser’s output changes over time, so if there are many independent lasers, with wavelengths operating close together, they can gradually drift on to each other’s territory, interfering to create beat frequencies that are interpreted as system noise. The wider the channel spacing (that is, the greater the difference in wavelength between subcarriers), the smaller the beat signal.

In CoWDM, the optical comb generator creates adjacent sub-carriers that have a well-known phase relationship, and so it is possible to minimise the interference as the wavelength channels are brought closer together. The Tyndall group has found that, by encoding each sub-carrier so the data rate matches the frequency spacing (so if the spacing is 40GHz, the data is encoded at 40Gbit/s), the frequency of the data and the frequency of any beating signals remain fixed and can be used to minimise the impact of crosstalk. This is achieved by careful time alignment and phase control between the sub-carriers.
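The orthogonality argument can be checked numerically: when the sub-carrier spacing matches the symbol rate, the beat between neighbours completes a whole number of cycles within each symbol period and averages away. A toy sketch (all names and numbers are illustrative):

```python
import cmath
import math

def beat_integral(spacing_hz, symbol_rate_hz, n_samples=1000):
    """Integrate the beat between two adjacent sub-carriers over one symbol
    period T = 1/symbol_rate. When the spacing equals the symbol rate, the
    beat completes a whole number of cycles and averages to (nearly) zero."""
    T = 1.0 / symbol_rate_hz
    dt = T / n_samples
    total = 0j
    for k in range(n_samples):
        total += cmath.exp(2j * math.pi * spacing_hz * k * dt) * dt
    return abs(total) / T  # normalised residual crosstalk

print(f"{beat_integral(40e9, 40e9):.3f}")  # matched spacing: close to zero
print(f"{beat_integral(50e9, 40e9):.3f}")  # unmatched: clearly non-zero
```

This is the same orthogonality principle that underpins OFDM in wireless systems; CoWDM applies it optically, at the sub-carrier level.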

Eye on the prize

“It means that we can guarantee that all the crosstalk terms (random beating) can be pushed over to the crossings of the eye diagram,” says Dr Fatima Gunning, senior researcher. “The result is a clear ‘eye opening’ in the middle where the signal is sampled by the detector, so there is always a clear distinction between a zero and a one. The crosstalk is still there but of little impact at the receiver, even in the absence of matched filters.”

Most of the group’s demonstrations have used a simple NRZ (non-return-to-zero) on-off modulation scheme, with all the subcarriers at the same polarisation. “By using polarisation multiplexing (that is, sending a band of wavelength channels at one polarisation and another band at 90 degrees) the efficiency can be increased by a factor of two. If higher-order modulation schemes such as DQPSK are implemented, the efficiency could be increased by another factor of two again,” says Dr Gunning.

All this work is driven by rising demand for bandwidth, but operators can’t beat the laws of physics. For instance, as the number of bits represented by each symbol increases, so does the bit error rate, requiring ever more forward error correction, which eventually outstrips the additional capacity offered.

Inevitably, as Tyndall’s Andrew Ellis and colleagues show in their paper, unless capacity demand saturates, or network architectures are devised that radically alter the demand for capacity, more fibre will be essential within the next two decades.
