The dash for flash
Can solid-state flash-memory technology replace hard disks? Performance figures sound promising in principle, but the results must be viewed in context - for now.
When it comes to electronic storage, the traditional answer has always been: more. Running out of capacity? Buy more drives. Need more I/O bandwidth? You probably need more drives so you can split the workload between them. Redundancy? Well, that's got to be more drives.
Now an alternative has appeared that could get server users off the carousel of consumption and onto a different track. It means dumping the disk and moving to the same solid-state memory technology as that employed in the server blade itself.
The density of non-volatile flash memory has increased to the point where it has become possible to squeeze tens of gigabytes of storage into the space of a single 2.5in hard drive. Let's face it, you can get 8GB into an iPod Nano, and that has to have a display, buttons and an earphone socket.
Now the memory makers are turning their attention to applications in IT by packing the chips into storage drives for laptops and for servers. Intel believes there is a huge market waiting to be tapped in solid-state flash drives: Pat Gelsinger, co-general manager of Intel's digital enterprise group, claimed at the company's last developer forum in San Francisco that, even with drives as small as 32GB, data centre owners will want to trade rotating disks for solid-state storage.
"Capacity will not be the buying criterion," he claims; instead, they will be looking at the higher I/O performance that flash memory is meant to provide. Gelsinger reckons the improvement can be as much as 50-fold. And, don't forget the green dimension: Intel insists that a solid-state drive can consume between 20-25 per cent of the power of its disk-based counterpart.
Intel has a motive for believing that flash has a rosy future, and not just in servers but in laptops too: the company has a 50 per cent stake in a company - IMFT (short for IM Flash Technologies) - that makes the type of flash memory expected to go into these drives. Intel's co-owner of IMFT, Micron Technology, launched its own batch of disk replacements in October 2007, soon after Gelsinger's speech.
You need to look a little further into the vendors' claims when it comes to evaluating the trade-off between solid-state and disk drives, although benchmarks do point to solid-state having the definite upper hand in terms of performance. Intel's power claim was against the figures shown on the datasheet for a Seagate 15k Savvio, not from a live test. And the I/O performance figures that Intel quoted were for reads only.
As digital-camera users can testify, write speeds on flash cards are not so good - it can take a while before the green light that signifies a write in progress goes out on a digital single-lens reflex camera. A basic solid-state drive that uses just flash memory is likely to have a write performance somewhere in the 50-100MB/s range; Seagate claims a sustained throughput of 80-110MB/s for its 15k Savvio disk drive. The headline write performance of flash-only drives is respectable, then, but not spectacular. However, going forward, manufacturers are likely to ship uncached flash drives only for laptops, not servers.
Does the relatively laggardly write performance of flash memory chips matter in practice? Maybe not. Transaction processing, the environment where people really care about how many I/O operations you can do in a second, tends to be more write-intensive than many other applications, which are more read-centric. But software companies have been working around the limitations of disk drives for years, and one of the optimisations they use works for flash-based drives too.
"In Oracle, the concept of delayed block cleanout, where blocks are only written when needed, gets around write performance issues in many cases, in a high volume write situation was the only time it makes a significant difference," says Mike Ault, a database-tuning consultant and co-author of a book on performance optimisation with solid-state drives. "On the previous generation of drives, using high-end memory, we saw over a 140 times improvement in read performance using an Oracle database. For write performance we only saw a 30 to 40 per cent improvement, but whether that was due to Oracle overhead or the solid-state drive wasn't clear."
A solid-state drive such as the RamSan-400, made by Texas Memory Systems, can sustain more than 1Gbps of throughput, according to its maker, and it does this through the use of caching. The first level of memory inside it is not flash, but the same kind of random-access memory (RAM) as found on the server's printed-circuit board (PCB). The RAM can absorb writes as fast as the server's own main memory, then spool the changes out to flash as the slower memory becomes available.
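The caching idea can be sketched in a few lines. This is a hypothetical toy model, not the RamSan's actual firmware: writes land in a fast RAM buffer, and older blocks are spooled out to the slower flash store only when the buffer fills.

```python
import collections


class CachedFlashDrive:
    """Toy model of a RAM write cache in front of slower flash.

    Illustrative only: capacities and eviction policy are invented
    for the sketch, not taken from any real drive.
    """

    def __init__(self, ram_capacity_blocks=4):
        self.ram = collections.OrderedDict()  # fast write buffer
        self.flash = {}                       # slower backing store
        self.ram_capacity = ram_capacity_blocks

    def write(self, block_no, data):
        # Writes land in RAM at main-memory speed...
        self.ram[block_no] = data
        self.ram.move_to_end(block_no)
        # ...and the oldest blocks are spooled out to flash
        # only when the buffer overflows.
        while len(self.ram) > self.ram_capacity:
            old_block, old_data = self.ram.popitem(last=False)
            self.flash[old_block] = old_data  # the slow flash write

    def read(self, block_no):
        # Reads check the RAM buffer first, then fall back to flash.
        if block_no in self.ram:
            return self.ram[block_no]
        return self.flash.get(block_no)
```

The host sees main-memory write latency as long as the buffer can absorb the burst; sustained write throughput is still bounded by how fast the flash behind it can drain.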
Dense and sense
One problem that faces the flash-memory drive is capacity. Some 20 years ago, pundits predicted that solid-state memory would offer more capacity-per-dollar than disk drives. That crossover never happened, and unless you believe in the most optimistic predictions about flash memory, it is not likely to any time soon. But, when it comes to server applications, flash's apparent lack of storage density may play to its advantage.
Capacity is not such a big issue in transaction-oriented server applications as in desktop and even portable machines, claims Dean Klein, vice president of memory system development at Micron Technology. "Look at the densities used in these systems. They are small: 30GB, 80GB, maybe 160GB. That is not big, and it's within striking distance for flash," he says.
Ault claimed that capacity is not the key criterion for picking drives if you are concerned about performance. In fact, using a small number of high-capacity drives can be problematic because it forces more transactions onto a smaller number of drives, leading to bottlenecks in the I/O controller as well as the storage controller.
With their lower capacity, solid-state drives can offer better granularity than disk-based storage based on higher-capacity designs. Within the flash-memory drives themselves, there is scope to improve throughput by exploiting parallelism.
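The parallelism point can be illustrated with a sketch. This is a hypothetical simplification of what a flash controller does: blocks are striped round-robin across independent channels, each of which can program a page concurrently, so aggregate throughput scales with the channel count.

```python
def stripe_blocks(blocks, n_channels):
    """Round-robin a stream of blocks across independent flash channels.

    Hypothetical illustration of channel-level parallelism; real
    controllers also juggle wear levelling and bad-block mapping.
    """
    channels = [[] for _ in range(n_channels)]
    for i, block in enumerate(blocks):
        # Consecutive blocks go to different channels, so their
        # (slow) programme operations can overlap in time.
        channels[i % n_channels].append(block)
    return channels
```

With four channels, eight consecutive blocks become four two-block queues that can be written in parallel, roughly quartering the wall-clock write time in this idealised model.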
"I always stress that capacity has two components: storage space and I/O speed," says Ault. "You can get a terabyte disk drive, but its I/O capability is still limited to about 200 I/Os per second for linear access, and even less with non-linear I/O. This huge storage capacity, with even the lower-capacity drives still at 72GB or more, means that, if you size for the I/O rate you need, you usually over-specify."
Ault recommends a visit to the benchmark site put together by the Transaction Processing Performance Council (www.tpc.org). "Download the detailed reports: you'll see that they can utilise from 100 to 400 disk drives for just a terabyte of data," he says. "My 18-drive array with 72GB drives is over a terabyte; in fact, the disk size to needed volume ratio is usually 20-30 to one just to obtain the necessary I/O rate."
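Ault's arithmetic can be made concrete. The sketch below sizes an array under both constraints, using the rough figure of 200 I/Os per second per spindle quoted above; the workload numbers in the usage are invented for illustration.

```python
import math


def drives_needed(data_tb, drive_gb, iops_needed, iops_per_drive=200):
    """Size a disk array for both capacity and I/O rate.

    Illustrative only: ~200 random I/Os per second per spindle is the
    rough rule of thumb quoted in the text; real drives vary.
    """
    for_capacity = math.ceil(data_tb * 1000 / drive_gb)
    for_iops = math.ceil(iops_needed / iops_per_drive)
    # You must buy enough drives to satisfy whichever constraint is
    # larger - which is why I/O-bound workloads end up hugely
    # over-specified on raw capacity.
    return max(for_capacity, for_iops)
```

For a hypothetical one-terabyte database needing 40,000 I/Os per second, capacity alone calls for 14 of the 72GB drives, but the I/O rate demands 200 of them: roughly 14TB of spindles for 1TB of data, the kind of over-specification the TPC reports show.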
As more suppliers move into solid-state storage, you can expect more trading of benchmarks to try to show which technology has the upper hand - and transaction processing is only one possible server application. As the density of flash memory improves and vendors work on I/O optimisation to make the most of what solid-state memory can do, you can expect a lot more activity in this area.
"It will be an interesting time in the next two years," Klein predicts.