vol 7, issue 3

Internet traffic jams: how to avoid them

27 March 2012
By Philip Hunter
The Internet needs to be rethought to cope with massively escalating traffic volumes

Trilogy Project’s objective is to re-architect the concept of how ICT interconnects

High-end routers - such as Cisco’s Catalyst 6513 - have been designed to help manage escalating data volumes

The system of Internet highways and byways is being reconfigured to cope with the size and shape of traffic heading over it, while Internet companies are dreaming up fresh approaches to avoid the jams altogether.

It's an old story with a new twist. The big fret among users was once the idea that the Internet would run out of bandwidth before the end of the 2000s, or that its IP address space was soon to be exhausted. Not every techno-prophet subscribed to these notions, but most agreed that the somewhat haphazard fashion in which the Internet was built out in the 1990s and 2000s did not take account of the traffic demands it faces in the 2010s.

The IP address availability issue rumbles on still, as the remedy - migration from the legacy IPv4 protocol to IPv6 - is only just getting going in most regions. The core bandwidth issue - that is, whether the existing Internet infrastructure can manage with the proliferating volumes of traffic being loaded onto it - has largely been sorted out by advances in switch/router technology and ingenious innovations by the networking companies for obtaining greater capacity from the available routes.

That is despite the accelerating growth in Internet traffic generated by video and, more generally, by the increasing number of large unstructured files being shunted around. That growth is causing many Internet strategists to reconsider how the Internet of the future should be planned.

The Internet was, of course, constructed with traffic management in mind: routers constantly check the available routes to an IP packet's destination, and send it on its way via the route of least contention. They will often send packets the long way round when the shortest path appears congested with previously-sent traffic.
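
To make that route-of-least-contention idea concrete, the toy sketch below computes a shortest path over links whose weights stand in for congestion; the topology, weights and code are invented for illustration and bear no relation to how any particular vendor's routers actually work.

```python
import heapq

# Toy topology: each link carries a cost that rises with congestion,
# so a longer but quieter route can beat the shortest congested one.
# All nodes, links and weights here are invented for illustration.
links = {
    "A": {"B": 1.0, "C": 4.0},   # A-B is short but leads to...
    "B": {"A": 1.0, "D": 9.0},   # ...a heavily congested B-D link (weight 9)
    "C": {"A": 4.0, "D": 2.0},   # the 'long way round' via C is cheaper overall
    "D": {"B": 9.0, "C": 2.0},
}

def least_contention_path(src, dst):
    """Dijkstra's algorithm over congestion-weighted links."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in links[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

print(least_contention_path("A", "D"))   # -> (6.0, ['A', 'C', 'D']), the longer way round
```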

"Lack of raw bandwidth is unlikely to be a gating issue, especially as '100G' becomes the new currency in the backbone of IP and optical networks," says Houman Modarres, senior director of marketing at the IP Division of Alcatel-Lucent. Fibre networks existing or being deployed will have plenty of capacity to cope with further increases in traffic volume with the help of a technique called dense wave division multiplexing (DWDM), which enables single fibres to carry multiple channels, each encoded at a different wavelength, up to a maximum of 160 at 10Gbps each at present.

This adds up to an aggregate bit-rate of 1.6Tbps per fibre, and given that there are laboratory demonstrations of up to almost 1,000 40Gbps channels, this is set to increase another 25 times or so over the next decade. At the same time vendors are looking at stepping up from 100Gbps to 400Gbps over single-fibre channels.
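
The arithmetic behind those headline figures is simple enough to check; the short sketch below just reproduces the numbers quoted above.

```python
# Aggregate capacity per fibre with dense wavelength-division multiplexing (DWDM),
# using the figures quoted above.
channels_today, rate_today = 160, 10e9    # 160 wavelengths at 10 Gbit/s each
channels_lab, rate_lab = 1_000, 40e9      # lab demonstrations: ~1,000 x 40 Gbit/s

capacity_today = channels_today * rate_today   # 1.6e12 bit/s = 1.6 Tbit/s
capacity_lab = channels_lab * rate_lab         # 4.0e13 bit/s = 40 Tbit/s

print(f"Today: {capacity_today / 1e12:.1f} Tbit/s per fibre")
print(f"Lab:   {capacity_lab / 1e12:.1f} Tbit/s per fibre")
print(f"Growth: {capacity_lab / capacity_today:.0f}x")   # -> 25x, as stated above
```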

The changing profile of data, with the growth in large unstructured files and online video, poses a new challenge of timely, reliable delivery, irrespective of what theoretical bit-rate is available. The danger is that the Internet chokes on these large files and video streaming sessions, and thereby fails to meet requirements for latency and mandated Quality of Service. For live video streaming, delayed packets are as worthless as dropped ones, resulting in flaky picture quality with no scope for retransmission.

The changing nature of Internet traffic is creating issues not of capacity, but of performance: whether the core switching fabric can cope with the anticipated level of routing within the Internet. For this reason the major Internet infrastructure vendors such as Cisco Systems, Juniper Networks, and Alcatel-Lucent are still investing in their core routers and switches, and are keen to draw attention to the scale of the expected traffic deluge.

In February 2012 Cisco published its latest Global Visual Networking Index (VNI) focusing on the growing impact of mobile data, which it predicts will increase by 18 times between 2011 and 2016, with smartphone traffic rising by 50 times and tablet traffic by 62 times. The number of Internet-connected devices will soar to 50 billion by 2020 according to some estimates, as the 'Internet of Things' establishes itself (see box, below). This gives rise to another issue: how to handle a vast number of small data transmissions, particularly at the edge of the network, according to Dominic Elliot, solutions architect at Cisco.

"Edge devices are increasingly called on to deal with large amounts of small transmissions associated with signalling or data bursts," Elliot says. "It is essential that as we scale the network and service elements they are capable of meeting the changing nature of the Internet traffic profile." This will require increased CPU capability within the network, optimised to deal with these numerous small bursts of data.

In the immediate future the greater challenge lies in handling the growing movement of large unstructured data files between two or more sites, according to Steve Broadhead, director of vendor-independent testing organisation Broadband-Testing. Such files include video and high-resolution digital images, and these cause issues regardless of the bandwidth of the connection. In some cases there is no hurry to deliver the file, but in others, including streaming video, latency is a big issue.

According to Broadhead: "The problem here is that traditional data transfer protocols such as File Transfer Protocol (FTP) are designed neither to take advantage of large data pipes, nor to resolve latency issues. So, for usage such as data-centre-to-data-centre transfers, or in specialist industries such as medical and geophysics, where they need to transfer digital images as fast as possible between two points across the globe, there is a real problem to be solved."
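
A large part of that problem is the bandwidth-delay product: a single TCP stream can only keep one window's worth of data in flight per round trip, so on a long, fast link most of the pipe sits idle. The figures in the rough sketch below - a 10Gbps link, a 150ms round trip and a classic 64KB window - are assumptions chosen for illustration, not numbers from Broadhead.

```python
# Bandwidth-delay product: how much data must be 'in flight' to keep a long,
# fast link busy, and what throughput a limited TCP window actually achieves.
# The link speed, RTT and window size below are assumptions for illustration.
link_bps = 10e9            # 10 Gbit/s intercontinental link
rtt_s = 0.150              # 150 ms round-trip time

bdp_bytes = link_bps * rtt_s / 8
print(f"Window needed to fill the pipe: {bdp_bytes / 1e6:.0f} MB")   # ~188 MB

window_bytes = 64 * 1024   # a classic 64 KB TCP window without window scaling
throughput_bps = window_bytes * 8 / rtt_s
print(f"Throughput with a 64 KB window: {throughput_bps / 1e6:.1f} Mbit/s")  # ~3.5 Mbit/s
```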

WAN optimisation boost

As Broadhead also points out, this has given renewed impetus to the WAN optimisation business, with emerging vendors such as Talon Data and Bitspeed creating technology to optimise these large data transfers at connection speeds up to 10Gbps. The assumption now is that the bottleneck lies in the network's ability to deliver all the packets within an allotted time frame, rather than in the raw bandwidth itself. The focus is on judicious use of cache storage within the network, along with close coupling between source and destination, to minimise the extent of packet retransmissions. Talon Data Systems, for instance, uses a proprietary technique on top of the Internet's Transmission Control Protocol (TCP), combining buffering with interaction between sender and receiver to keep careful track of packets dropped during transmission and retransmit them as quickly as possible.

Bitspeed's system, called Velocity, is an interesting case in that it splits file transfers into parallel streams to make full use of the network's capacity, whereas TCP by itself transmits over a single path. This is similar in principle to the Multipath TCP developed by the Trilogy Project (see box, below), which may well become an integral part of the IP protocol stack supported right across the Internet in due course.
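
The principle can be sketched in a few lines of code: split a transfer into byte ranges, fetch the ranges over several connections at once, then reassemble them in order. This is a generic illustration only - the URL, file size and chunk size are placeholders - and not Bitspeed's or the Trilogy Project's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

# Generic illustration of parallel-stream transfer: fetch byte ranges of one
# file over several concurrent connections and reassemble them in order.
# The URL, file size and chunk size are placeholders, not real endpoints.
URL = "https://example.com/large-file.bin"
FILE_SIZE = 400 * 1024 * 1024   # assume 400 MB, e.g. learned from a HEAD request
CHUNK = 100 * 1024 * 1024       # four 100 MB ranges -> four parallel streams

def fetch_range(start, end):
    """Fetch one byte range over its own connection."""
    req = Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req) as resp:
        return start, resp.read()

ranges = [(s, min(s + CHUNK, FILE_SIZE) - 1) for s in range(0, FILE_SIZE, CHUNK)]
with ThreadPoolExecutor(max_workers=len(ranges)) as pool:
    parts = list(pool.map(lambda r: fetch_range(*r), ranges))

data = b"".join(chunk for _, chunk in sorted(parts))   # reassemble in offset order
```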

IPv6 migration issues

The much bigger change regarding the IP protocol stack, though, is coming with migration from IPv4 to IPv6. IPv4 emerged as the Internet's first protocol in the 1970s, using a 32-bit addressing scheme, which meant that, in theory, it could only support about four billion connected devices. In anticipation that this address space would become exhausted as the Internet went public in the 1990s, IPv6 was developed, expanding addresses from 32 bits to 128 bits - far more address space than will ever be required to support the 'webosphere' - while bringing various efficiency improvements, such as a simplified fixed-length header, that accelerate routing calculations.
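
The difference in scale is easy to quantify; the quick check below uses Python's standard ipaddress module.

```python
import ipaddress

# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
print(2 ** 32)    # 4,294,967,296 -> roughly four billion IPv4 addresses
print(2 ** 128)   # about 3.4e38 possible IPv6 addresses

# The same figures via the standard library:
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)   # 4294967296
print(ipaddress.ip_network("::/0").num_addresses)        # 340282366920938463463374607431768211456
```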

However, techniques such as Network Address Translation (NAT) enabled the dwindling stock of IPv4 addresses to sustain operations for longer than expected, by allowing multiple computers and other IP devices to share one public IP address. As a result, only about 1 per cent of Internet packets are IPv6, and only 0.15 per cent of the top million websites are accessible via the new protocol.
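
In essence, NAT is a translation table keyed on ports, letting many private addresses hide behind a single public one. The minimal sketch below illustrates the idea; the addresses and ports are invented, and real NAT implementations track far more state.

```python
# Minimal sketch of port-based NAT: many private hosts share one public address,
# with the router rewriting source address/port and remembering the mapping.
# All addresses and ports here are invented (documentation ranges).
PUBLIC_IP = "203.0.113.5"
nat_table = {}      # (private_ip, private_port) -> public_port
next_port = 40000

def translate_outbound(private_ip, private_port):
    """Rewrite an outgoing connection to the shared public address."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply on the public port back to the right private host."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None

print(translate_outbound("192.168.1.10", 51000))   # -> ('203.0.113.5', 40000)
print(translate_outbound("192.168.1.11", 51000))   # -> ('203.0.113.5', 40001)
print(translate_inbound(40001))                    # -> ('192.168.1.11', 51000)
```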

The acceleration in growth of IP-connected devices engendered by the Internet of Things and online video services has brought matters to a head, and from this year (2012) migration from IPv4 to IPv6 will gather pace.

IPv6 will help cope with the Internet traffic explosion, having itself evolved since its inception in the 1990s to incorporate better support for mobile data and video, with mechanisms for Quality of Service and a simple method of supporting roaming devices. It caches a device's home address alongside the temporary address as it roams, enabling IPv6 packets to be readily routed to the correct node. Under IPv4, this was bolted on, with varying implementations.
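
Conceptually, that is a binding between a device's stable home address and its current, temporary 'care-of' address; the toy lookup below illustrates the idea only, with invented documentation-range addresses, and is not an implementation of the IPv6 mobility machinery.

```python
# Toy sketch of the home-address / care-of-address binding behind IPv6 mobility:
# a packet addressed to the stable home address is forwarded to wherever the
# device currently is. Addresses are invented documentation-range examples.
binding_cache = {
    "2001:db8:a::42": "2001:db8:b::42",   # home address -> current care-of address
}

def forward(destination):
    """Deliver to the care-of address if the destination is roaming."""
    return binding_cache.get(destination, destination)

print(forward("2001:db8:a::42"))   # roaming: forwarded to the visited network
print(forward("2001:db8:a::7"))    # not roaming: delivered directly
```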

Complexity comes with cost

As Broadband-Testing's Steve Broadhead notes, while on the whole IPv6 will make routing more efficient, it does increase the immediate packet processing overhead since the header is bigger: "Carrying all this information will certainly put more strain on existing router/switch-based architectures that are IPv6 compliant, but were designed for use with IPv4." Although IPv4 and IPv6 are incompatible, leading IP switching vendors such as Brocade have come up with tools to help support migration and enable a degree of interoperability during the process.
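
The extra overhead is easy to quantify: the IPv6 base header is a fixed 40 bytes against IPv4's 20-byte minimum, which bites hardest on the small signalling-style packets discussed earlier. The payload sizes in the sketch below are chosen purely for illustration.

```python
# Per-packet header overhead: IPv4's minimum header is 20 bytes, IPv6's fixed
# base header is 40 bytes. Payload sizes here are chosen for illustration.
IPV4_HEADER, IPV6_HEADER = 20, 40

for payload in (64, 512, 1400):
    v4 = IPV4_HEADER / (IPV4_HEADER + payload)
    v6 = IPV6_HEADER / (IPV6_HEADER + payload)
    print(f"{payload:5d}-byte payload: IPv4 overhead {v4:.1%}, IPv6 overhead {v6:.1%}")
# Small packets feel the larger header most; for 1,400-byte payloads the
# difference is only a couple of percentage points.
```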

"Our ServerIron ADX switches can accept IPv6 requests arriving from IPv6 clients, and translate them into IPv4 requests'for internal hosts that do not communicate in IPv6," notes Pavel Radda, marketing manager at Brocade. "They can also insert the original IPv6 client IP addresses so that IPv4 hosts can use that information when required." Given that migration could be a lengthy process, such ability to make use of IPv6 information could prove valuable. Eventually IPv6 will become predominant, and be supported by all hosts, routers and switches: then the migration issue will fade away. However, the need to continue improving the Internet's switching fabric will never go away for, as recent experience has shown, it has to cope not just with continual increases in traffic data volume, but also ever greater unpredictability and changes in the profile of the files and sessions that it needs to handle.

While the network will cope in terms of raw bandwidth, there is a danger that costs of managing the extra complexity could escalate, unless there is continuing innovation and integration of higher-level intelligence into core routing and switching products. "To avoid breaking the economics of core network provisioning, service providers must extract every bit of cost from their core networks without compromising services or reducing quality of experience for their end users," says David Noguer Bau, head of service provider marketing at Juniper Networks. "Success hinges on finding an economical, more scalable, and more efficient model for building and maintaining core transport networks."

This will require radical changes in approach, right down to the core ASICs (application specific integrated circuits) at the heart of the systems, according to Noguer Bau, who points to Juniper's Junos Express chipset, designed to address the challenges of scale as well as speed and cost.

Yet, while such developments are important, there is also, as Cisco's Elliot acknowledges, a shift in emphasis away from a focus on the capability of the underlying switching fabric towards intelligent routing of data at a higher content level, making optimal use of caching within the Internet to overcome bottlenecks and ensure timely delivery of IP packets. "Most providers are now striking a balance between careful capacity planning of their core and content optimisation techniques," says Elliot.

There is one thing, though, that is beyond the capability of any vendor to change, and that is the speed of light. This imposes a fundamental constraint on the ability of service providers to meet demands for low latency in real-time applications.
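
Light in optical fibre travels at roughly two-thirds of its vacuum speed, around 200,000km per second, which puts a hard floor under round-trip times over long distances. The rough calculation below uses an approximate London-New York distance by way of illustration.

```python
# Propagation delay floor set by the speed of light in fibre (~200,000 km/s,
# roughly two-thirds of the vacuum speed). The distance is an approximation.
speed_in_fibre_km_s = 200_000
london_new_york_km = 5_600          # approximate great-circle distance

one_way_ms = london_new_york_km / speed_in_fibre_km_s * 1000
print(f"One-way delay:    {one_way_ms:.0f} ms")        # ~28 ms
print(f"Round-trip floor: {2 * one_way_ms:.0f} ms")    # ~56 ms, before any switching delay
```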

As Brocade's Radda agrees, service providers and vendors alike cannot alter this fundamental physical law, but can only do their best to accommodate it by distributing content in caches and ensuring that switching and routing functions add as little as possible to the overall delay.

"Carriers are addressing this by not only being geographically closer to the points they need to deliver data to, but through ultra low latency equipment," says Radda. Further progress by the vendors could shave useful microseconds over round trip latencies, but the major contribution will be made by higher level measures involving caching and intelligent content distribution. In order to maximise efficiency and minimise costs, more of the intelligence required to manage content distribution will be embedded into the network infrastructure.


EU/IETF initiative: The Trilogy Contribution

The Internet Engineering Task Force (IETF) has turned to an EU project to help tackle congestion caused by proliferating traffic from video and other unstructured data. The three-year, €9.2m project finished in March 2011, but left behind one significant contribution, the Multi-Path Transmission Control Protocol (MPTCP), which has been taken up as an Internet Draft by the IETF and is due to become a major component of the Internet.

TCP is the transport protocol of the Internet used for applications requiring guaranteed delivery of data, such as email and file transfer; but until now TCP has operated along a single path set up for the duration of a session, which has become a handicap in the modern Internet, where many potential routes often exist between source and destination. TCP fails to exploit these multiple routes to balance traffic across the network.

MPTCP, developed under the Trilogy Project, rectifies this omission by enabling data to be transmitted from one network node to another via multiple paths at the same time. To do this efficiently, it uses an algorithm that firstly determines whether there are multiple paths available, and then gives each path a score on the basis of its level of congestion at the time.
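
In outline, the scheduler then pushes more data down the less congested paths. The sketch below illustrates that principle with invented path names and congestion scores; the coupled congestion-control algorithms actually specified for MPTCP are considerably more involved.

```python
# Simplified illustration of multipath scheduling: send more data over the
# less congested paths. Path names and congestion scores are invented, and
# this is only an illustration of the principle, not MPTCP itself.
paths = {"wifi": 0.2, "cellular": 0.6, "ethernet": 0.1}   # congestion score, higher = busier

def share_of_traffic(paths):
    """Weight each path by the inverse of its congestion score."""
    weights = {name: 1.0 / score for name, score in paths.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

for name, share in share_of_traffic(paths).items():
    print(f"{name:9s} carries {share:.0%} of the data")
# ethernet carries the most, the congested cellular path the least
```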

Given that the aim is to deploy MPTCP across the whole Internet, there are other considerations, one being that it must operate transparently, without the need for any modification to applications, appearing to them just like a normal end-to-end TCP connection.

Second, there are security considerations, because MPTCP works by using multiple addresses for each path, and these can change during a TCP session. This opens up the session to new vulnerabilities and attacks that exploit the ability to change addresses, which the IETF is currently studying and documenting.

Emerging online standards: Lighthouse Project: The Internet of Things

While most attention has focused on the impact of rising traffic volumes and large unstructured files, another issue moving up the interconnection agenda is the so-called Internet of Things, or machine-to-machine (M2M) networking. This has had plenty of coverage, but almost all of it relating to applications or to immediate access via wireless or satellite, with less attention paid to the impact on core networks - possibly because the data volumes generated by the Internet of Things will be relatively small compared with online video, and so it has been assumed that the core network will take them in its stride.

Even so, the sheer number of transactions will cause problems, even with the help of current WAN acceleration techniques. By 2020 it's probable that there will be some 50 billion devices connected to the Internet, perhaps a lot more depending on which analyst you believe, but all surveys seem to agree that by then well over half of these will be things rather than computers.

There is an issue of definition here, in that currently an Internet-connected thing is defined as some physical or biological entity - a container, food package, pacemaker, or even an animal - that does not itself have on-board processing ability. There is the related category of machine-to-machine communication, where, for example, an ocean buoy might deliver data about sea temperature and salinity to an environmental monitoring station. With further advances in ultra-small processing chips, in time these things may all have significant computational ability, so a better definition would relate to the applications, which at present are largely tracking and monitoring, although even that may change.

Whichever way it evolves, the Internet of Things will generate massive numbers of transactions and polling operations that will impose a strain on the Internet's switching fabric. Such factors are now being considered by the European Union's Lighthouse Project, which is developing an architectural reference model for the Internet of Things, along with a set of key building blocks.

At present there is no universal global Internet of Things, just various closed application deployments - in container tracking, for example - which are therefore perhaps more accurately called 'Intranets of Things'. The Lighthouse Project, which runs until August 2013, aims to create a platform bringing these Intranets together and stimulating the whole field of connected Internet-enabled devices, with networking and communications a key ingredient.
