Whooshing up the WAN

Optimising the wide-area network has become a top priority for thrusting enterprises: resolving latency, not bandwidth, is the key.

Trends such as globalisation, the Web-enablement of applications, and server virtualisation have increasingly exposed the wide-area network (WAN) as a major impediment to overall application performance.

The result has been a boom in tools that mitigate the WAN's defects. The problem is not so much a lack of bandwidth, although that commodity can be in short supply, especially to branch offices. It is latency, which is depressing at first sight because it would appear that little can be done about it.

Fortunately, this is not quite true, because latency across a transmission path has two sources: one imposed by the network's routing and switching devices, the other by the speed of light in a fibre or of electrons in a wire.

Those last two quantities are about the same, equating to a minimum round-trip delay ranging from around 40ms (milliseconds) for transmission within the UK, to 100ms for transatlantic links, rising to 300ms for, say, London to Australia. Over satellite links the situation is even worse, with round-trip delays rising to 750ms because of additional internal latency.
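
As a rough, back-of-the-envelope illustration (the distances and the two-thirds-of-vacuum-speed figure for light in fibre are approximations, and real fibre routes run longer than great-circle paths), the physical floor under these figures can be estimated in a few lines of Python:

    # Rough estimate of the physical minimum round-trip time over fibre.
    # Light in glass propagates at roughly two-thirds of its vacuum speed.
    SPEED_IN_FIBRE_KM_PER_S = 200_000   # approximation, ~0.67c

    routes_km = {                       # approximate great-circle distances
        "London-Edinburgh": 530,
        "London-New York": 5_570,
        "London-Sydney": 17_000,
    }

    for route, km in routes_km.items():
        rtt_ms = 2 * km / SPEED_IN_FIBRE_KM_PER_S * 1000
        print(f"{route}: ~{rtt_ms:.0f}ms round trip before any device latency")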

"Latency is the 'silent killer' of applications," says Gilles Trachsel, solutions marketing manager at Juniper Networks, vendor of the WX WAN optimiser range. It is latency that makes the WAN harder to optimise than the LAN, where the principle task is to apportion bandwidth appropriately and ensure that critical real-time applications such as Voice-over IP are given priority.

This has to be done over the WAN too, but with the additional issue of handling latency, part of which is imposed by the laws of physics. A round-trip delay of 100ms across the Atlantic might seem acceptable, but for two factors operating at different levels: one at the transport layer, the other within applications.

Latent problems

TCP (transmission control protocol), which provides error-free end-to-end transport for many Internet and IP processes including email, file transfer and many streaming media applications, operates by transmitting blocks of data and then waiting for an acknowledgement.

This amplifies the impact of the round-trip delay when sending large files, but the greater problem is the fact that many applications are very 'chatty', as Mark Lewis, senior director for marketing alliances at Riverbed Technology, points out: "There are chatty applications that were originally designed to operate across a LAN, but are now expected to run across the WAN where latency starts to cause issues."
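
A back-of-the-envelope sketch in Python makes the point; the 64KB window and the 400 application round trips are illustrative numbers, not figures from Riverbed or any other vendor:

    # Why latency, not raw bandwidth, often caps WAN throughput: a send-and-wait
    # protocol can have at most one window of data in flight per round trip.
    window_bytes = 64 * 1024   # a typical unscaled TCP window (illustrative)
    rtt_s = 0.100              # the transatlantic round trip quoted above

    max_throughput = window_bytes / rtt_s
    print(f"Throughput ceiling: {max_throughput / 1024:.0f} KB/s, however fat the pipe")

    # Chatty applications compound this: every application-level exchange costs
    # at least one further round trip before any useful data moves.
    round_trips = 400          # e.g. opening a file in a chatty protocol (illustrative)
    print(f"Protocol chatter alone: {round_trips * rtt_s:.0f} seconds of waiting")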

WAN optimisation tools and appliances attempt to mitigate latency in various ways including caching, software replication, and data blocking or prioritising mechanisms under the guise of WAN or application 'acceleration'. Most of the mechanisms operate by slowing other less critical data down, or by avoiding having to transmit over the WAN at all.

Still, it looks like acceleration to the user and some of the techniques have achieved remarkable levels of latency reduction and throughput improvement, even if there is still further to go. The next step may well be to integrate WAN optimisation tools both with the network and the applications.

This is the belief of Adam Davison, EMEA VP of sales at Expand, who contends that integration of WAN optimisation into the network fabric will be one of the major developments in the field over the next two years. "This means the integration of WAN optimisation into routers and switches, the development of software-only and virtualisation solutions, and the bundling of WAN optimisation into managed services offerings," he says.

For now, though, WAN optimisation appliances must cope with the current crop of applications, protocols and network devices. But at least the principal challenges for WAN optimisation have become clear, according to Steve Broadhead, director of Broadband Testing, a European independent testing laboratory that has scrutinised several WAN optimisers.

"There are four very obvious issues with WAN traffic optimisation," Broadhead says. "These are visibility, security/manageability, bandwidth capacity, and of course latency."

Visibility is necessary to identify in real-time the traffic flowing across the WAN and the applications that generated it. Security/manageability is needed to control what applications and data each user can access, and allocate WAN resources in terms of bandwidth and quality of service (QoS) accordingly.
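
A minimal sketch of the per-application accounting that underpins visibility might look like the following; real appliances use deep packet inspection rather than port numbers, and the port-to-application mapping here is purely illustrative:

    from collections import defaultdict

    # Minimal sketch of the per-application accounting behind 'visibility'.
    PORT_TO_APP = {80: "web", 443: "web/SSL", 445: "CIFS file sharing", 5060: "VoIP"}

    def account(flows):
        """flows: iterable of (destination_port, bytes_transferred) pairs."""
        totals = defaultdict(int)
        for port, nbytes in flows:
            totals[PORT_TO_APP.get(port, "unclassified")] += nbytes
        return dict(totals)

    print(account([(443, 120_000), (445, 5_000_000), (6881, 900_000)]))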

Finding room for all

Bandwidth capacity needs to be shared out among applications and users, and utilised as efficiently as possible, with data compression being the traditional vehicle for achieving the latter, followed more recently by various acceleration techniques.
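
As a minimal sketch of the saving that lossless compression can deliver, the following uses Python's standard zlib module on a deliberately repetitive payload; the ratio achieved depends entirely on the data, so treat the output as illustrative:

    import zlib

    # Lossless compression of a deliberately repetitive payload before it
    # crosses the WAN. The ratio printed is illustrative, not typical.
    payload = b"INVOICE,ACME LTD,NET 30 DAYS,GBP,LINE ITEM,WIDGET,QTY 100;\n" * 2000

    compressed = zlib.compress(payload, 6)
    ratio = len(payload) / len(compressed)
    print(f"{len(payload)} bytes -> {len(compressed)} bytes "
          f"(about {ratio:.0f}x less to push over the WAN)")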

And while latency, as we have observed, is unavoidable on global networks, it can at least be cut back towards the speed-of-light limit, and there is further scope for reducing the additional components, such as those introduced by multiple hops in an IP network.

These components of WAN optimisation are interlocking and inter-dependent. For example, it is impossible to manage bandwidth without having visibility over the data and applications that are consuming it. Broadband Testing's Broadhead also highlights the importance of location, so that for example employees working at home might be denied access to certain applications or resources that they would be allowed to access from the office. Similarly, access to some bandwidth-hungry applications might be denied from branch offices, or given lower priority, if only low-speed access links are available, in order to maximise performance for critical tasks.
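
A location-aware policy of the kind Broadhead describes boils down to a lookup table; the locations, applications and decisions below are invented purely for illustration:

    # Sketch of a location-aware access and priority policy. All entries are
    # invented for illustration.
    POLICY = {
        ("home",   "erp"):            "deny",
        ("branch", "video-training"): "low-priority",
        ("branch", "voip"):           "high-priority",
        ("office", "erp"):            "allow",
    }

    def decide(location, application):
        return POLICY.get((location, application), "best-effort")

    print(decide("home", "erp"))               # deny
    print(decide("branch", "video-training"))  # low-priority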

Expand's Davison argues that WAN optimisation techniques need to be implemented as part of a broad strategy focused on the whole IT infrastructure. In doing so, he identifies two additional critical ingredients: application-specific acceleration and WAFS (wide area file services). Yet other industry commentators, such as Dave Ewart, senior manager of product marketing at WAN optimisation vendor Packeteer, regard WAFS as an umbrella concept embracing a variety of application and data acceleration techniques.

No single solution

Whatever the case, all such techniques are focused primarily on the latency problem, which can only be tackled effectively by a co-ordinated approach. The first step for a global enterprise will often be to establish regional data centres, located perhaps in the Americas, EMEA and Asia Pacific, to ensure maximum latency is kept under 150ms. Then various steps can be taken to minimise the number of trips required for each transaction or application.

This, suggests Ewart, should entail a combination of protocol acceleration and caching. Protocol acceleration involves transparently and temporarily replacing a latency-sensitive protocol, such as TCP, with a more efficient one.

In Packeteer's case, the replacement protocol, called Xpress TCP, provides local acknowledgement of data blocks, so that large files are not held up by repeated round trips over long distances. TCP acceleration techniques like this are particularly advantageous in high-latency environments for large data flows (as in data centre replication), or alternatively in environments where error rates are high.
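
The local-acknowledgement idea can be sketched conceptually as below. This is not Packeteer's Xpress TCP or any vendor's actual protocol, just an illustration of acknowledging blocks at LAN speed while a background forwarder pays the WAN latency:

    import queue
    import threading

    # Conceptual sketch of 'local acknowledgement' protocol acceleration.
    wan_queue = queue.Queue()

    def local_appliance(blocks):
        for block in blocks:
            wan_queue.put(block)   # hand the block to the WAN forwarder
            yield "ACK"            # acknowledge immediately, no WAN round trip

    def wan_forwarder(send_over_wan):
        while True:
            block = wan_queue.get()
            if block is None:      # sentinel: nothing more to send
                break
            send_over_wan(block)   # each send still pays the long-haul latency

    delivered = []
    forwarder = threading.Thread(target=wan_forwarder, args=(delivered.append,))
    forwarder.start()

    acks = list(local_appliance(b"block-%d" % i for i in range(5)))
    wan_queue.put(None)
    forwarder.join()
    print(len(acks), "local ACKs;", len(delivered), "blocks forwarded over the WAN")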

Caching complements protocol acceleration by ensuring that commonly accessed data is replicated close to users on a more permanent basis, with the WAN then used for synchronisation where performance is less critical.

This can involve replicating between data centres, or pushing certain content out on a more granular basis to smaller offices. In the branch office case, even if latency is not an issue, data may be cached to save bandwidth, which may be at a premium if, say, only relatively low-speed links are available. 
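
A branch-office cache reduces, in essence, to a content-addressed lookup that only touches the WAN on a miss. The sketch below is illustrative; the fetch function stands in for a real, slow transfer:

    import hashlib

    # Sketch of branch-office caching: content is keyed by a hash and the WAN
    # is only consulted on a miss. Names and contents are illustrative.
    cache = {}
    wan_fetches = 0

    def fetch_over_wan(name):
        global wan_fetches
        wan_fetches += 1
        return b"contents of " + name.encode()   # placeholder for the slow bit

    def read(name):
        key = hashlib.sha256(name.encode()).hexdigest()
        if key not in cache:
            cache[key] = fetch_over_wan(name)    # pay the WAN cost once
        return cache[key]

    for _ in range(10):
        read("Q3-results.xlsx")
    print("WAN fetches:", wan_fetches)           # 1, not 10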

The art of squeezing

Compression was the first WAN optimisation technique to be widely deployed. It remains crucial, and is the predominant mechanism for video. A DVD-quality video stream arrives at a TV set compressed 100 times from its raw post production format, and may be reduced further for delivery over the Internet.

Ewart uses the term Intelligent Acceleration to describe the combination of compression, acceleration and caching that together make up the technical components of latency mitigation. The need for intelligent acceleration has been driven both by application trends, such as the growth in browser-based applications requiring frequent round trips, and by the general deployment trend towards centralisation and server virtualisation. As we have seen, though, centralisation can rarely be executed on a global basis because of latency.

There are other important factors to take into account relating to prioritisation and QoS, driven by changes in the profile of applications. One of the most profound changes is the growing use of the Web for accessing software as a service. This means that business applications can be contending in effect for the same bandwidth as social networking sites that employees might be accessing, as noted by Nigel Hawthorne, VP marketing at Blue Coat.

Block the bandwidth spongers?

One option would be simply to block bandwidth-consuming social networking sites such as Facebook, but most enterprises have come to see this as counter-productive, according to Hawthorne: "Many companies are embracing the benefits these sites bring in terms of networking and knowledge sharing." So instead, enterprises want to manage such usage to ensure business applications are not choked as a result. This requires a combination of visibility, to identify the data and applications, and capacity management, to prioritise accordingly.
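
The classify-then-prioritise approach can be sketched as a priority queue draining a shared link; classifying on hostname is a simplification, and the site lists are illustrative:

    import heapq

    # Sketch of classify-then-prioritise: business traffic is drained from the
    # shared link ahead of recreational traffic.
    BUSINESS = {"crm.example.com", "erp.example.com"}
    RECREATIONAL = {"facebook.com", "youtube.com"}

    def priority(host):
        if host in BUSINESS:
            return 0        # served first
        if host in RECREATIONAL:
            return 2        # served last
        return 1            # best effort

    link_queue = []
    traffic = [("facebook.com", "video chunk"),
               ("crm.example.com", "order update"),
               ("mail.example.com", "message")]
    for seq, (host, item) in enumerate(traffic):
        heapq.heappush(link_queue, (priority(host), seq, host, item))

    while link_queue:
        _, _, host, item = heapq.heappop(link_queue)
        print("sending", item, "for", host)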

The other important change in traffic profile is the growth in use of SSL-encryption, which now accounts for almost 30 per cent of enterprise data, according to Forrester Research. This has become a big problem as encrypted data cannot be prioritised or accelerated if it cannot be identified. Most of the WAN optimisation vendors have now got to grips with SSL encryption, but with varying approaches that work better in some situations than others.

Some vendors, such as Blue Coat, decrypt the SSL traffic in a proxy in order to inspect the contents and apply the usual optimisation techniques, before re-encrypting the data for the final leg of its journey. This has the additional potential advantage of making the decrypted SSL data available to intrusion detection systems, which are normally unable to apply virus filtering and other security techniques to SSL-encrypted data.
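
The decrypt-inspect-re-encrypt flow can be sketched as below. Real SSL interception terminates one TLS session with the client (using a certificate the client trusts) and opens a second to the server; here a toy XOR 'cipher' stands in for both TLS legs so that only the control flow is illustrated:

    # Deliberately simplified sketch of the decrypt-inspect-re-encrypt flow.
    TOY_KEY = 0x5A

    def toy_crypt(data):
        return bytes(b ^ TOY_KEY for b in data)   # stand-in for a TLS leg, not cryptography

    def inspect_and_optimise(plaintext):
        # The point at which caching, compression, prioritisation and
        # intrusion-detection checks can finally see the data.
        assert b"blocked-pattern" not in plaintext
        return plaintext

    def proxy(ciphertext_from_client):
        plaintext = toy_crypt(ciphertext_from_client)   # terminate client-side session
        plaintext = inspect_and_optimise(plaintext)
        return toy_crypt(plaintext)                     # re-encrypt for the onward leg

    wire_data = toy_crypt(b"GET /quarterly-report HTTP/1.1")
    print(proxy(wire_data) == wire_data)   # payload unchanged end to end, inspected in the middle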

This exemplifies the need to blend WAN optimisation with the whole IT environment, which is a cause fondly promoted by Cisco in its catch-up campaign. "Harmony with the application is critical," says Cisco's application networking services manager Kerry Partridge. "The network is there to support business process and the applications that back them, so if optimisation introduces errors in the application or excessively loads the data centre, then its value is questionable."

Cisco is expending its greatest effort on deploying WAN optimisation to branch and home offices, which increasingly have broadband connections into core enterprise networks. "WAN optimisation to the branch is well into the full adoption phase and therefore is still the most important topic," Partridge says.

The emphasis on the branch has in turn been driven by data centralisation and server virtualisation, which, as Partridge points out, are generating great cost savings for enterprises.

"The strongest and most tangible returns still come from server consolidation and associated cost savings with ROI [return on investment] commonly less than one year," Partridge contends.

In effect, then, this should help make the ROI business case for WAN optimisation, which to some extent is solving a problem created by centralisation.
