vol 8, issue 12

Data storage technology rethought

16 December 2013
By Philip Hunter

Data storage technologies are being re-engineered to meet emerging IT needs

Image captions:

X-IO's ISE storage: hybrid storage subsystems that combine HDD and SSD in a single managed platform.

Pure Storage's FA-400 platform: a combination of software-based and ASIC-accelerated always-on encryption protects data in both the flash and NV-RAM tiers.

NetApp's EF540 enterprise-class flash array: delivers more than 300,000 Input/Output Operations Per Second (IOPS).

An array of oxygen-depleted titanium dioxide memristors built at HP Labs: electric current through the memristors shifts the oxygen vacancies, causing a change in electrical resistance.

Storage technology is getting a rethink. It might seem straightforward - but it isn't. Alongside the continuing skirmishing between hard-disk and solid-state drives over their respective pros and cons, new challenges are coming from data-intensive applications and the need to analyse massive data sets.

The biggest single recent change in enterprise storage has been the growth in use of flash memory both on its own and in solid state drives (SSDs), but it has not always delivered fully on its promise. This is partly due to a mistaken belief that SSDs are inevitably more reliable than HDDs, but storage vendors are also culpable through over-hyping the products and failing to convey accurately how and where they should be deployed.

Most vendors have now woken up to these mistakes and this has driven innovation in techniques for organising hierarchical storage tiers so as to make the best use of both HDDs and SSDs to optimise the balance between capital cost, density, performance, and – increasingly importantly – energy consumption.

Enterprise storage requirements, meanwhile, have been evolving and adding new demands to storage architectures, with the rise of Big Data one of the most headline trends. Big Data has not suddenly sprung upon an unsuspecting IT function but has emerged incrementally through growing demand for intelligence derived from multiple sources of both structured and unstructured data.

Also, Big Data means that lots of data of different types has to be assimilated across the storage hierarchy, such that it is all readily available for high-speed analytics processing when required. This trend itself is creating demand for optimised storage hierarchies that enable rapid processing of large amounts of data from diverse sources.

The key point to note, whether talking about Big Data specifically or enterprise storage in general, is that SSDs should be deployed judiciously, not ubiquitously. On the one hand they are best reserved for workloads where high performance and low latency are required; but equally SSD, or at any rate flash storage used as cache, can have a place throughout the hierarchy, including the lower layers towards archiving.

'The advent of Big Data means lots of high-capacity, low-dollar storage that can be migrated to fast disk and in-memory databases to analyse as necessary,' explains Blair Parkhill, vice president at X-IO Technologies, a vendor of integrated SSD/HDD storage systems incorporating software for optimising organisation and data placement within hierarchies. 'The advent of Hadoop for the cataloguing of all the input data is now helping.'

Hadoop is open source software designed to distribute data across commodity storage clusters, detecting and handling failures at the application layer. It has become almost synonymous with Big Data, but not because it magically enables powerful analytics to occur. What it does do is help provide the underlying flexibility, robustness and performance at low cost by enabling the use of commodity servers, with the ability to add these on demand, irrespective of which vendor they come from or which operating system they run - at least, that is the theory.
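That flexibility rests on a simple placement idea: files are split into large blocks and each block is replicated across several commodity nodes, so a failed machine costs capacity but no data. The Python sketch below illustrates the principle only; the node names, the 128MB block size and the three-way replication are assumptions in the spirit of HDFS defaults, not Hadoop's actual code.

import hashlib

BLOCK_SIZE = 128 * 1024 * 1024   # HDFS-style 128MB blocks (illustrative)
REPLICATION = 3                  # assumed replication factor

nodes = ["node-a", "node-b", "node-c", "node-d", "node-e"]  # hypothetical commodity servers

def place_blocks(file_size_bytes):
    """Split a file into blocks and pick REPLICATION distinct nodes per block."""
    placements = {}
    n_blocks = -(-file_size_bytes // BLOCK_SIZE)  # ceiling division
    for block_id in range(n_blocks):
        # Deterministic but spread-out choice of a starting node for each block
        start = int(hashlib.md5(str(block_id).encode()).hexdigest(), 16) % len(nodes)
        placements[block_id] = [nodes[(start + i) % len(nodes)] for i in range(REPLICATION)]
    return placements

def handle_node_failure(placements, failed_node):
    """Re-replicate any block that lost a copy, mimicking HDFS-style self-healing."""
    for block_id, replicas in placements.items():
        if failed_node in replicas:
            replicas.remove(failed_node)
            spare = next(n for n in nodes if n != failed_node and n not in replicas)
            replicas.append(spare)
    return placements

if __name__ == "__main__":
    layout = place_blocks(1_000_000_000)              # a ~1GB file becomes 8 blocks
    layout = handle_node_failure(layout, "node-b")    # lose a node, keep the data
    for block, replicas in layout.items():
        print(f"block {block}: {replicas}")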

As Parkhill points out, Big Data does not itself call for greater use of SSDs, with organisation and optimisation being critical. 'Going all-SSD is overkill in almost all instances, because Big Data is about analytics, and analytics of the relevant data at that time,' he argues. 'We believe enterprise HDD or a hybrid of a small percentage of enterprise SSD and enterprise HDD gives the best IO density required at the lowest price.'
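Parkhill's point about IO density at the lowest price can be made concrete with some back-of-the-envelope arithmetic. The unit costs and performance figures below are placeholder assumptions purely for illustration, not vendor pricing.

# Illustrative-only unit costs and performance figures (assumptions, not quotes)
drives = {
    "enterprise_hdd": {"cost": 300.0, "capacity_tb": 4.0, "iops": 200},
    "enterprise_ssd": {"cost": 1200.0, "capacity_tb": 1.0, "iops": 50000},
}

def blended(ssd_fraction, capacity_tb=100.0):
    """Cost and aggregate IOPS for a tier holding `capacity_tb`,
    with `ssd_fraction` of the capacity on SSD and the rest on HDD."""
    ssd_tb = capacity_tb * ssd_fraction
    hdd_tb = capacity_tb - ssd_tb
    n_ssd = ssd_tb / drives["enterprise_ssd"]["capacity_tb"]
    n_hdd = hdd_tb / drives["enterprise_hdd"]["capacity_tb"]
    cost = n_ssd * drives["enterprise_ssd"]["cost"] + n_hdd * drives["enterprise_hdd"]["cost"]
    iops = n_ssd * drives["enterprise_ssd"]["iops"] + n_hdd * drives["enterprise_hdd"]["iops"]
    return cost, iops

for frac in (0.0, 0.1, 1.0):
    cost, iops = blended(frac)
    print(f"{frac:>4.0%} SSD: ~£{cost:>9,.0f}, ~{iops:>9,.0f} IOPS")

Under these assumed figures, if a 100TB working set needs around half a million IOPS, a 10 per cent SSD mix meets the target for roughly a sixth of the all-flash price, which is the hybrid argument in miniature.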

It is worth remembering, though, that Big Data is a work in progress rather than a challenge that has pretty much been solved, and further innovations in storage organisation are still required. This is the view at HP (Hewlett-Packard), which acquired the UK software company Autonomy in October 2011 for £7.1bn to pursue its strategy for Big Data analytics. The acquisition became embroiled in controversy when HP subsequently claimed it had been the victim of irregular accounting practices, and became engaged in a public slanging match with one of Autonomy's founders, Mike Lynch.

This inevitably disrupted HP's assimilation of Autonomy technology, but the company has nonetheless come out with IDOL (Intelligent Data Operating Layer), designed to extract meaningful information from unstructured data sources such as email, social media, audio and even video as a prelude to analytics. Among other things it integrates with data structured using Hadoop, so it fits well into many emerging storage hierarchies optimised for Big Data analytics.

Chris Johnson, vice president and general manager for HP Storage, admits, however, that there is demand for yet higher levels of performance around Big Data, and that Autonomy technology will play a big role in meeting this requirement. 'Big Data requires huge information resources that can be intelligently exploited for business advantage,' Johnson declares. 'Combining storage technology with Autonomy could be an interesting innovation in support of Big Data for the future. Watch this space.'

HP's idea here is to build into storage systems the processing that converts unstructured data into a form ready for analysis, which would greatly speed up analytics applications and could be of benefit where near-instantaneous decisions need to be taken in response to changing events in the field. It could, for instance, enable online or TV adverts to be targeted at individuals on mobile handsets on the basis of their immediate activity or location, rather than relying largely on historical analysis of known preferences, as tends to happen at present.

While sophisticated data extraction has yet to be incorporated in storage systems, low-level processing, such as de-duplication to eliminate redundant data, already is. Specialist storage vendor Pure Storage performs in-line deduplication alongside data compression to cut data volumes before writing to its solid-state-based FlashArray. The company claims this can easily reduce the amount of data that has to actually be written by a factor of five, cutting the cost of flash storage.
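The mechanics of in-line data reduction are straightforward to sketch: fingerprint each incoming chunk, keep only chunks not already stored, and compress what does get written. The Python below is a generic illustration of the technique, not Pure Storage's implementation; the fixed 4KB chunk size is an assumption.

import hashlib
import zlib

class DedupeStore:
    """Toy in-line deduplicating, compressing block store."""

    CHUNK = 4096  # assumed fixed chunk size

    def __init__(self):
        self.chunks = {}        # fingerprint -> compressed chunk
        self.raw_bytes = 0      # logical bytes the host wrote
        self.stored_bytes = 0   # physical bytes actually kept

    def write(self, data: bytes):
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            self.raw_bytes += len(chunk)
            fp = hashlib.sha256(chunk).hexdigest()   # content fingerprint
            if fp not in self.chunks:                # only new content hits the flash
                compressed = zlib.compress(chunk)
                self.chunks[fp] = compressed
                self.stored_bytes += len(compressed)

    def reduction_ratio(self):
        return self.raw_bytes / max(self.stored_bytes, 1)

store = DedupeStore()
store.write(b"virtual machine image " * 50000)   # highly repetitive data
print(f"data reduction: {store.reduction_ratio():.1f}x")

Highly repetitive data, such as sets of near-identical virtual machine images, is exactly where reduction factors of five and beyond are plausible.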

Data de-duplication also has a role to play further down the storage hierarchy even though costs per bit are lower there, according to HP's Johnson. Furthermore, costs can still be saved lower down through use of tape-based technology. 'The storage of data throughout its life needs to incorporate high-performance and high-cost media, but also de-duplication to low-cost back-up appliances and further onto tape, which still has a role to play,' says Johnson.

Many data centres will therefore have three basic grades of storage – flash/SSD, HDD and tape – with sub-divisions between them; the challenge is to optimise the balance between the three so that performance targets are met without overspending. This takes the storage strategy up to a new level of technological complexity.
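One simple way to reason about that balance is to place each data set according to how often it is touched and how quickly it must respond, letting anything colder fall down a tier. The sketch below shows the shape of such a placement policy; the thresholds are arbitrary illustrative assumptions, not anyone's product logic.

from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    accesses_per_day: float
    latency_target_ms: float

def choose_tier(ds: DataSet) -> str:
    """Crude tiering policy: hot and latency-sensitive -> flash,
    warm -> HDD, cold archive -> tape. Thresholds are illustrative."""
    if ds.latency_target_ms < 5 or ds.accesses_per_day > 10000:
        return "flash/SSD"
    if ds.accesses_per_day > 1:
        return "HDD"
    return "tape"

workloads = [
    DataSet("OLTP order table", accesses_per_day=2_000_000, latency_target_ms=1),
    DataSet("last quarter's clickstream", accesses_per_day=50, latency_target_ms=200),
    DataSet("7-year compliance archive", accesses_per_day=0.01, latency_target_ms=60000),
]

for w in workloads:
    print(f"{w.name:30s} -> {choose_tier(w)}")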

Until now there has been a tendency within major IT projects to over-provision storage to cover up for inevitable shortcomings in the management of the tiers. The process of storage provisioning is complex in any case, having to meet targets for capacity, performance, cost and disaster recovery that can conflict with each other. The result, as X-IO's Parkhill has already noted, is that more storage capacity tends to be deployed than is needed, and too much of it in the higher-grade, more expensive units such as SSDs.

Software management trend

Remedies are emerging under the cloak of the 'Software-Defined Data Centre' (SDDC) proposals, with the overall objective of building on virtualisation to separate management and provisioning completely from the underlying hardware. SDDC is an architectural approach to IT infrastructure that extends virtualisation concepts (such as abstraction, pooling and automation) to all of a data centre's resources and services, aiming to achieve 'IT as a service'.

The overarching objective here is to enable capacity to be added as the demand requires, for individual projects or load increases, by dropping in storage, networking or processing units separately. Applied to storage, where Hadoop is playing a role, the principle is the same in that it should be possible to add SSDs, HDDs or tape systems, separately as needed, from any hardware vendor. However, in the practice of IT, principles are not the same as practicalities.

There is an irony here in that the term 'Software-Defined Data Centre' was largely coined by virtualisation market leader VMware, now owned by storage vendor EMC, which has a vested interest in encouraging sales of its own hardware. This at first seemed to lead EMC to de-emphasise the value of software-defined storage, because it would enable its customers to incorporate systems from other vendors; but the company has now accepted that the game is up and that the data centre world is moving inexorably towards commoditised hardware under the umbrella of virtualised multi-tier management.

Against this background, where storage, computation and networking are separated within the SDDC, each of the three has to pull its weight in meeting overall demands, which are increasing all the time. While CPU performance continues to keep pace with Moore's Law, and network bandwidth has expanded at a similar rate both internally and over the wide area (through increased deployment of fibre among other things), storage systems have tended to lag behind.

In terms of capacity HDDs have kept up quite well, even given the much-mooted Big Data boom, but they are falling behind in access speed, as Laurence James, products, solutions and alliances manager at storage and data management company NetApp, points out. This is where flash storage comes in, with NetApp specialising in deploying it throughout the storage tiers to ensure that performance targets are met while avoiding overspending on it. Flash memory can be deployed as cache in front of HDDs right across the data centre, but particularly where high performance is needed, to ensure that read and write times are not just sufficiently fast but also consistent, which is just as important.

'With the introduction of flash technologies, intelligent caching is key to ensuring the active data resides in the most appropriate tier,' says James. 'Automation is a must-have feature here and NetApp have a portfolio of flash-based products designed to optimise workload performance at the Server, Storage Controller, and Disk Array. For those workloads that require consistent low latency and response times, such as OLTP (Online Transaction Processing), all Flash Arrays such as NetApp EF540 are increasingly in demand.'
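The 'intelligent caching' James describes boils down to keeping the currently active blocks on flash and letting everything else stay on disk. Below is a minimal sketch of a flash read cache sitting in front of an HDD tier; the cache size and the least-recently-used eviction policy are assumptions for illustration, not NetApp's algorithm.

from collections import OrderedDict

class FlashReadCache:
    """Toy LRU read cache: a flash-sized dict in front of a slow HDD tier."""

    def __init__(self, hdd_tier, capacity_blocks):
        self.hdd = hdd_tier                 # dict-like backing store
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block id -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)        # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.hdd[block_id]                   # slow path: fetch from HDD
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)          # evict least recently used
        return data

hdd = {i: f"block-{i}" for i in range(1000)}
cache = FlashReadCache(hdd, capacity_blocks=100)
for i in list(range(100)) * 5 + list(range(900, 1000)):   # a hot set plus a cold scan
    cache.read(i)
print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.0%}")

The same principle applies whether the flash sits in the server, in the storage controller or in the array itself, which is the portfolio James describes.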

As James points out, different grades of flash are now available at varying price points, performance levels and lifespans. This needs to be taken into account when evaluating flash-based options, given that flash's big advantage is not just lower latency but consistency, both of performance and of lifespan. HDD failures occur more randomly, with a probability that rises with age and use, whereas flash endurance is much more predictable, although it does vary between the different grades.

'Flash has a much-improved failure predictability than older mechanical hard disk technologies,' James says. 'The challenge is that, depending on which type of flash is deployed, each has a defined endurance related to the number of program/erase (P/E) cycles per cell. Beyond this number of P/E cycles the cells become unreliable.' Where durability is the main requirement, SLC (single-level cell) flash might be preferred, while eMLC (enterprise multi-level cell) flash would be chosen where cost and capacity are more important.
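Those P/E ratings translate directly into an expected service life for a given write workload, assuming the controller spreads wear evenly. The Python below is a rough back-of-the-envelope estimate only; the drive capacity, daily write volume, eMLC rating and write-amplification factor are illustrative assumptions.

def drive_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb, write_amplification=3.0):
    """Estimate drive life assuming perfect wear levelling:
    total writable bytes = capacity * rated P/E cycles,
    consumed at the host write rate times the write-amplification factor."""
    total_writes_gb = capacity_gb * pe_cycles
    physical_writes_per_day = daily_writes_gb * write_amplification
    return total_writes_gb / physical_writes_per_day / 365

# Rated P/E cycles per grade; the eMLC figure is an assumed intermediate value
for grade, cycles in [("SLC", 100_000), ("eMLC", 30_000), ("MLC", 10_000), ("TLC", 1_000)]:
    years = drive_lifetime_years(capacity_gb=800, pe_cycles=cycles, daily_writes_gb=2_000)
    print(f"{grade:5s}: ~{years:,.0f} years at 2TB of host writes per day")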

Storage preferences are governed by differing application requirements, whose needs may have to be balanced within a shared SDDC. OLTP, as already noted, demands very low latency for large numbers of transactions each involving small amounts of data, for which flash is best suited. But this may not be the case for, say, a TV broadcaster playing out video via a scheduled service. Although delay must be minimised, access latency will not be an issue if the video data is played out sequentially, as X-IO's Blair Parkhill also notes: 'SSD is not for video as it is highly sequential. When architected right, you can get a lot of performance from striping SATA drives – and a good caching algorithm,' he says. The situation is more complex for on-demand video accessed by people at different times, with support for rewind and pause as well, when flash has a role to play in keeping latency down – but even then most of the data can reside on suitable HDDs, which is important to contain costs given the huge size of high-definition video files.

'Yes, maybe flash will be used for caching, but in general UPS (Uninterruptible Power Supply)-backed RAM in servers, along with good enterprise SATA drives or regular large-capacity enterprise drives, works best for streaming video,' says Parkhill. 'Servers with adequate RAM attached to good, dense, reliable storage that can handle many video streams at once allow for high reliability and are the key to the growth in media.'

Given that video accounts for an ever-increasing proportion of the ICT world's digital data, this suggests that, contrary to some predictions, HDDs are in little danger of losing out to SSDs in pure volume terms. At the same time, however, SSD will remain critical for many high-performance applications, including parts of the video distribution chain, and will be the main focus of continuing research and development, as with the memristor development programme currently under way at HP Labs.


Technology briefing: finding the best way to cope with data write/re-write wear and tear

Several misconceptions exist over how the competing technologies compare. A common one is that SSDs, having no moving parts, must automatically be more reliable and durable than HDDs. In fact SSDs suffer from bit errors too, and under heavy write loads they wear out more quickly than HDDs.

The write process causes most of the wear because it requires erasing memory cells and then rewriting each bit, and the electrical stress of repeatedly forcing charge through the oxide layer insulating the floating-gate transistors gradually degrades that layer.

Cells are the building blocks of SSD flash storage, each comprising a single floating-gate transistor that stores charge, and they are now predominantly arranged in NAND structures, where the cells are connected in series. NAND has been preferred to the alternative NOR structure, where cells are connected in parallel, because it allows a smaller chip area for a given storage capacity; as a result the cost per bit is lower, as is power consumption.

Originally each cell could store just one bit, in the SLC (single-level cell) arrangement. But demand for higher storage densities and lower cost per bit, coming particularly from makers of mobile phones, which use solid-state storage exclusively, has driven the development of MLC (multi-level cell) flash that stores more than one bit per cell. Typically it is two bits, but it can be three, as in TLC (triple-level cell) flash.

MLC reduced costs and increased densities, but at the expense of durability and reliability, which was much more of an issue for enterprise storage than in mobile handsets, because of the much greater load of write operations.

With multiple bits per cell, the probability of misreading one of the bits is higher, which increases the bit error rate and shortens the effective life, because less wear is needed before reliability becomes unacceptable. This has led to the development of enterprise MLC (eMLC), incorporating various techniques to improve both reliability and durability. Error-correction algorithms have been deployed to reduce bit error rates, such as Hamming codes, which add parity bits to correct one-bit errors and detect two-bit errors, the latter being important for two-bit MLC cells. Further techniques have been deployed alongside or instead of Hamming, especially for TLC with three bits per cell, such as low-density parity check (LDPC) codes.
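A Hamming code of the kind mentioned here works by adding parity bits at power-of-two positions, so that the pattern of failed parity checks pinpoints a single flipped bit (an extra overall parity bit extends this to detecting, but not correcting, double errors). Below is a minimal single-error-correcting Hamming(7,4) sketch over one 4-bit nibble, purely to show the mechanism.

def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits with parity at positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]       # codeword positions 1..7

def hamming74_correct(code):
    """Recompute the parity checks; their binary pattern (the syndrome)
    is the 1-based position of a single flipped bit, or 0 if none."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]             # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]             # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]             # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1                   # flip the offending bit back
    return [c[2], c[4], c[5], c[6]], syndrome  # recovered data bits

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[5] ^= 1                                   # simulate a single bit error in storage
recovered, pos = hamming74_correct(word)
print(recovered == data, "error was at position", pos)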

Various techniques are then used to improve flash write endurance, the three most important being wear levelling, over-provisioning and write amplification reduction. Wear levelling works by spreading write activity evenly over the whole flash drive so that it wears evenly, preventing some cells from deteriorating more quickly than others through heavy loading. This extends the average lifespan of a drive.
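Wear levelling can be pictured as a mapping layer that steers each logical write to whichever physical block currently has the fewest erase cycles behind it. The toy sketch below shows that idea only; real controllers use far more sophisticated static and dynamic schemes.

import random

class WearLevelledFlash:
    """Toy flash translation layer: each logical write lands on the least-worn block."""

    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks      # wear per physical block
        self.mapping = {}                       # logical block -> physical block

    def write(self, logical_block):
        # Pick the physical block with the lowest erase count so far
        physical = min(range(len(self.erase_counts)), key=lambda b: self.erase_counts[b])
        self.erase_counts[physical] += 1        # each rewrite costs this block an erase cycle
        self.mapping[logical_block] = physical

flash = WearLevelledFlash(n_blocks=64)
for _ in range(10000):
    flash.write(random.choice(range(8)))        # a heavily skewed, hot workload
print("max wear:", max(flash.erase_counts), "min wear:", min(flash.erase_counts))

Even though only eight logical blocks are being hammered, the erase counts across all 64 physical blocks stay within one cycle of each other.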

Second, over-provisioning, just as the name suggests, means making the drive bigger than it needs to be, typically by around 20 per cent. This trades capacity for lifespan, with the blocks lasting longer because the rate of wear on each is reduced. The third technique is reducing write amplification, which arises because the flash controller has to perform more physical writes than the host actually requests: existing data has to be erased, or moved out of the way, in whole blocks before new data can be written.
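The interaction between the two can be shown with a simple calculation: the more spare capacity the controller has, the less live data it must copy out of each block before erasing it, so fewer physical writes are needed per host write. The formula below is a crude, idealised worst-case estimate for illustration, not a measurement of any particular drive.

def worst_case_write_amplification(overprovision_fraction):
    """Crude worst-case estimate: if the logical space is completely full,
    each physical block being reclaimed still holds roughly
    u = 1 / (1 + OP) valid data that must be copied before the erase,
    so every host write costs about 1 / (1 - u) physical writes."""
    u = 1.0 / (1.0 + overprovision_fraction)
    return 1.0 / (1.0 - u)

for op in (0.07, 0.20, 0.40):
    wa = worst_case_write_amplification(op)
    print(f"{op:.0%} over-provisioning -> worst-case write amplification ~{wa:.1f}x")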

When data is deleted from an HDD, the operating system simply removes the relevant index entry pointing to the storage location, leaving the blocks as they are until they are eventually overwritten; but an SSD must first erase the whole blocks containing data that is no longer needed before new data can be written there.

Techniques such as the Trim command have been designed to tell the SSD which pages hold data that is no longer needed, so that the drive can, in its own time, clear just those, rather than erasing a whole block and having to relocate still-valid data elsewhere on the drive.
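The effect of Trim can be sketched as a flag inside the flash translation layer: pages the operating system reports as deleted are marked invalid, so when the drive later reclaims a block it copies out only the pages that still matter. The toy illustration below shows the principle; real SSD garbage collection is considerably more involved.

class Block:
    """A flash block of fixed-size pages; pages are valid, invalid or free."""
    PAGES = 64

    def __init__(self):
        self.state = ["free"] * self.PAGES

class TrimAwareFTL:
    def __init__(self, block):
        self.block = block
        self.copies_during_gc = 0

    def trim(self, page_ids):
        """Host tells the drive these pages hold deleted data."""
        for p in page_ids:
            self.block.state[p] = "invalid"

    def garbage_collect(self):
        """Reclaim the block: only still-valid pages must be copied elsewhere."""
        self.copies_during_gc = self.block.state.count("valid")
        self.block.state = ["free"] * Block.PAGES

blk = Block()
blk.state = ["valid"] * Block.PAGES          # a block full of data
ftl = TrimAwareFTL(blk)
ftl.trim(range(0, 48))                       # filesystem has deleted three quarters of it
ftl.garbage_collect()
print("pages copied during GC:", ftl.copies_during_gc)   # 16 instead of 64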

Storage futures: at component level, is the memristor actually any closer?

SSD has one fundamental constraint: it can tolerate only a limited number of erase and write operations, as measured by the P/E (program/erase) cycle count. Furthermore, the tolerable P/E count is actually declining as chip fabrication geometries continue to shrink in pursuit of higher performance and memory density.

At present, SLC flash wears out after about 100,000 P/E cycles of each memory cell, two-bit MLC after 10,000 writes, and three-bit TLC flash just 1,000. This is a constraint, but can just about be tolerated. However, these figures apply only to the current generation of flash based on sub-50nm fabrication technology.

The P/E count roughly halves for each 10nm drop in fabrication geometry as tolerances are reduced, so it would become unacceptable at sub-20nm, let alone sub-10nm, geometries that will certainly be reached. Major R&D labs have therefore been investigating fundamentally different storage technologies that do away with transistors and capacitors, with HP's memristor widely tipped as the likely successor.
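Taking that rule of thumb literally gives a feel for why sub-20nm MLC and TLC worry drive designers. The quick calculation below applies the halving rule to the endurance figures quoted above, treating 50nm as the starting point; it is an extrapolation for illustration, not measured data.

# Endurance figures quoted above, taken as applying at roughly 50nm geometry
base_pe = {"SLC": 100_000, "MLC": 10_000, "TLC": 1_000}

def pe_at_geometry(pe_at_50nm, geometry_nm):
    """Apply the quoted rule of thumb: endurance roughly halves per 10nm shrink below 50nm."""
    halvings = max(0, (50 - geometry_nm) / 10)
    return pe_at_50nm / (2 ** halvings)

for geometry in (50, 30, 20, 10):
    row = ", ".join(f"{g}: {pe_at_geometry(pe, geometry):>8,.0f}" for g, pe in base_pe.items())
    print(f"{geometry}nm -> {row}")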

The memristor is an enhancement of the ordinary resistor, whose role is typically to control the level of electric current within devices, with memory capability added. The idea is that data bits would be stored as different levels of resistance, which are retained even when the device is switched off. These levels can be changed by applying a given voltage to the cells, which is the mechanism for writing data to the device.
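The read side of that idea is simple in principle: measure the cell's resistance and decide which of a set of predefined bands it falls into, each band standing for a bit pattern. The sketch below is hypothetical; the resistance bands are invented for illustration and are not HP figures.

# Hypothetical resistance bands (ohms) for a 2-bit-per-cell memristor -- illustrative only
BANDS = [
    ((0, 5_000), "11"),               # lowest resistance state
    ((5_000, 20_000), "10"),
    ((20_000, 80_000), "01"),
    ((80_000, float("inf")), "00"),   # highest resistance state
]

def read_cell(resistance_ohms):
    """Map a measured resistance to the bit pattern of the band it falls in."""
    for (low, high), bits in BANDS:
        if low <= resistance_ohms < high:
            return bits
    raise ValueError("resistance out of range")

for r in (1_200, 9_500, 45_000, 250_000):
    print(f"{r:>8,} ohms -> bits {read_cell(r)}")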

HP first demonstrated a proof-of-concept memristor in 2008, and claims that it is now close to DRAM in switching speed, while being bit-addressable and offering much greater write endurance than conventional flash: about one million P/E cycles have been achieved in lab tests. HP has not yet given a launch date for memristor-based devices, but rumours suggest it could well be before the end of 2014.
