Isilon boosts clustered storage with I/O accelerator
The emerging clustered storage battle has heated up with Isilon boosting CPU speed and I/O performance, hoping to repel big rivals such as IBM, NetApp, and EMC.
Its new package includes the fifth generation of Isilon’s software, OneFS v5.0, which yields a fourfold increase in CPU performance by implementing symmetric multiprocessing (SMP) to harness all four cores of a quad-core cluster node rather than just one.
The faster I/O in Isilon’s new Accelerator-x comes through partnerships with Chelsio, maker of 10 Gigabit Ethernet (10GbE) adapters, and with Force10 and Fujitsu for 10GbE switches. The Chelsio adapters allow the Accelerator-x to interoperate with the Ethernet switches. Each storage node in an Isilon cluster can now pump data at up to 210Mbps, taking the maximum total throughput to roughly 20Gbps with up to 96 nodes supported.
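The quoted figures hang together as simple arithmetic: the per-node rate multiplied by the maximum node count gives the headline cluster throughput. A minimal sketch of that back-of-the-envelope check, using only the numbers stated in the article:

```python
# Sanity check of the quoted Isilon cluster throughput figures.
# Both constants come from the article; everything else is just arithmetic.

MAX_NODES = 96        # maximum nodes supported in one Isilon cluster
NODE_RATE_MBPS = 210  # per-node data rate, in megabits per second

total_mbps = MAX_NODES * NODE_RATE_MBPS
total_gbps = total_mbps / 1000  # convert Mbps to Gbps

print(f"{total_mbps} Mbps ≈ {total_gbps:.1f} Gbps")
```

Running this prints 20160 Mbps ≈ 20.2 Gbps, matching the "up to 20Gbps" ceiling the company quotes.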
“This is 40 times what any NAS (network attached storage) system can achieve,” claims Phil Crocker, Isilon’s EMEA marketing director. The story for capacity is even better, Crocker added, with the maximum of 2.3PB exceeding any NAS- or SAN-based system a hundredfold.
The principal difference from traditional NAS- or SAN-based solutions, though, is that clustered storage provides a single file system, shielding the data centre from many of the complexities of storage management. Commodity disks and controllers can be used to build clusters that scale out to multiple nodes, 96 in Isilon’s case, repeating for storage the earlier evolution of CPU clusters introduced in the 1980s by Digital Equipment with its VAX systems.
Clustering’s time has come for storage because the combination of huge, fast-growing volumes of unstructured data such as video, and the steeply falling price per megabyte of disk storage over the last five years, now makes it cost-effective. The potential I/O bottleneck is solved by distributing controllers to each node, while Isilon’s software shields data centres from the burden of managing the distribution of data across the cluster.