Virtual time waits for no one

Speed equals distance over time, and timing is everything in data centre server virtualisation - particularly where real-time application performance is concerned.

With an operating system, applications and processes packaged into multiple virtual machines (VMs) all sharing the same underlying hardware resources on a single physical computer, allocating CPU cycles, memory, bandwidth and I/O functions to one VM without adversely affecting the response time of applications running on another is tricky, to say the least.

The issues with the performance of software hypervisors have long been understood, and have led silicon and application manufacturers to find new ways of speeding things up by addressing bottlenecks, particularly at the I/O (input/output) level, and within the network interface card (NIC).

Some of the latest Intel and AMD CPUs and chipsets support Intel's VT-d and AMD's AMD-Vi extensions. These are designed to assign I/O resources directly to a VM, and to isolate I/O activity between different VMs, taking on some of the work previously done by the hypervisor - thereby eliminating some of the latency that can affect the response times of applications running on VMs.
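
On a Linux host, a quick way to check whether these extensions are actually in effect is to look for the IOMMU groups the kernel exposes once VT-d or AMD-Vi is enabled. The short Python sketch below does just that; the sysfs path is the standard Linux one, but the script itself is illustrative rather than anything supplied by Intel or AMD.

    # Sketch: check whether the host IOMMU (Intel VT-d / AMD-Vi) is active.
    # Assumes a Linux kernel exposing groups under /sys/kernel/iommu_groups.
    from pathlib import Path

    def iommu_groups():
        """Map each IOMMU group number to the PCI devices it contains."""
        root = Path("/sys/kernel/iommu_groups")
        groups = {}
        if not root.exists():
            return groups
        for group in root.iterdir():
            devices = [dev.name for dev in (group / "devices").iterdir()]
            groups[int(group.name)] = devices
        return groups

    groups = iommu_groups()
    if not groups:
        print("No IOMMU groups found - VT-d/AMD-Vi may be disabled in firmware or the kernel")
    for num in sorted(groups):
        print(f"group {num}: {', '.join(groups[num])}")

Devices that land in the same group cannot be isolated from one another by the IOMMU, so they have to be assigned to a VM together.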

"It assigns a VM directly to the device, meaning we can support multiple VMs from a single device," says Tim Mueting, AMD's product manager of virtualisation solutions. "It's about device isolation, and providing a secure mechanism from a hardware perspective to prevent errant read/writes, establish protection domains, and assign memory tables to those domains."

Both Intel's and AMD's technologies support the single-root I/O virtualisation (SR-IOV) specification developed and managed by the PCI Special Interest Group (PCI-SIG). SR-IOV is an extension to the PCI Express (PCIe) specification; it is designed to enable multiple VMs to directly access and share a PCIe interface card's I/O resources, providing I/O address translation services and handling interrupt remapping.
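
In practice, carving a physical NIC into SR-IOV virtual functions (VFs) is done through the host operating system rather than the specification itself. The Python sketch below shows roughly how that looks on a Linux host via sysfs; the interface name "eth0" and the VF count are assumptions for illustration, it needs root privileges, and it only works where the NIC and its driver advertise SR-IOV support.

    # Sketch: create SR-IOV virtual functions on a Linux host via sysfs.
    # "eth0" and the VF count of 4 are illustrative assumptions; requires root.
    from pathlib import Path

    def enable_vfs(iface: str, num_vfs: int) -> None:
        """Ask the NIC driver to create num_vfs virtual functions on iface."""
        dev = Path(f"/sys/class/net/{iface}/device")
        total = int((dev / "sriov_totalvfs").read_text())
        if num_vfs > total:
            raise ValueError(f"{iface} supports at most {total} VFs")
        # The kernel requires the count to be reset to 0 before it can be changed.
        (dev / "sriov_numvfs").write_text("0")
        (dev / "sriov_numvfs").write_text(str(num_vfs))

    enable_vfs("eth0", 4)

Each VF then behaves as a lightweight PCIe function of its own, which is what a guest VM is given direct access to.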

SR-IOV and MR-IOV

Support for SR-IOV has so far been written into some - but not all - virtualisation software, including VMware and Xen-based platforms (such as Citrix, Novell and Red Hat), as well as NIC device drivers.

"The hypervisor and client device driver has to have underlying support for SR-IOV and that is a more difficult long-term activity," reckons Alan Priestly, virtualisation marketing manager at Intel.

SR-IOV also has limited support for the live migration of VMs from one physical server to another - crucial for application load balancing and disaster recovery - a limitation that hardware and software makers are also trying to work around.

To address this problem, and to help VMs handle more I/O-intensive workloads, Intel is also looking to shift packet inspection and routing for network traffic into the PCIe NIC, adding control logic and buffers to the silicon to sequence and prioritise the timing of data packet transmission and so increase throughput, especially on servers hosting 20 or 30 VMs simultaneously.

"The NIC has a set of multiple pipes on it which enables the hypervisor to assign parts of the [physical] machine. We use various protocols to sequence data flow, and prioritise traffic, or just have a round robin approach [to transmitting packets to or from multiple VMs]," Priestly explains.

SR-IOV means that VMs can make full use of the bandwidth available on high-capacity Fibre Channel over Ethernet (FCoE) NICs, for example.

"A VM using a 10GbE FCoE NIC can only use around 2.5Gbps of that capacity," Intel's Priestly adds, "but with hardware-assisted virtualisation that can be boosted to nearline speed of about 9.6Gb on multiple Ethernet channels."

While SR-IOV is still in the early stages of adoption, the PCI-SIG is working on the next step for I/O virtualisation - multi-root I/O virtualisation (MR-IOV) - which shares the resources of a PCIe NIC between different physical servers, a move that would help cut costs, power consumption and space requirements in data centres.

The bandwidth of a shared NIC could be dynamically provisioned to meet the workload requirements of applications, or of multiple VMs running on different blade servers, and no external network switches would be needed because a virtual switch would sit on the PCIe card itself.

The issues around multi-root I/O virtualisation remain complex, though - not least with respect to how best to automate I/O resource assignment on demand, support routing across the virtual interconnect infrastructure, and enforce quality of service (QoS) through the virtual stack itself. For the moment, and despite the efforts of companies such as NextIO and Intel, MR-IOV may be a technology before its time as far as data centre managers are concerned, and could end up competing with other high-bandwidth server interconnect technologies such as FCoE.

"The standards for MR-IOV based with the PCI SIG have not yet been written or published and I have not seen any plans yet where MR-IOV is even on a roadmap," admits Mueting. "As we see more customers running larger database applications and other I/O heavy applications, we might see more of those requirements [for MR-IOV] coming into the market."

