EMC Corp.'s latest flash strategy update reinforced the point that solid-state PCI Express cards in servers will play a role in enterprise data storage for IT shops that need performance acceleration.
But whether PCI Express (PCIe) flash ultimately shakes up the industry and appeals to the typical data center remains open to debate. The solid-state technology has pros and cons, and enterprise IT shops tend to be wary of change. Still, no one can dispute that the cards supply blazingly fast performance, either as cache or as primary storage.
The main advantage of PCIe cards is their ability to reduce latency. They connect directly to the PCIe bus and bring the flash and data closer to the CPU. They eliminate the overhead of traditional storage protocols, and -- under the right conditions -- deliver a level of performance that EMC now concedes is an order of magnitude better than the Serial-Attached SCSI (SAS) and SATA solid-state drives (SSDs) that it has been selling since 2008.
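To see why stripping out the protocol stack can amount to an order-of-magnitude gain, consider a back-of-the-envelope model in which total read latency is the NAND access time plus the interface overhead. The Python sketch below uses purely illustrative numbers -- not vendor or test-lab figures -- chosen only to match the rough magnitudes discussed in this article.

    # Back-of-envelope model: total latency = NAND access time + interface overhead.
    # All numbers are illustrative assumptions, not measured vendor figures.
    NAND_READ_US = 75    # assumed NAND page-read time, in microseconds
    SAS_STACK_US = 925   # assumed SAS/SATA HBA + protocol-stack overhead
    PCIE_STACK_US = 25   # assumed overhead of a direct PCIe attach

    sas_latency = NAND_READ_US + SAS_STACK_US    # ~1,000 microseconds (1 ms)
    pcie_latency = NAND_READ_US + PCIE_STACK_US  # ~100 microseconds

    print(f"SAS/SATA SSD: {sas_latency} us per read")
    print(f"PCIe flash:   {pcie_latency} us per read")
    print(f"Speedup:      {sas_latency / pcie_latency:.0f}x")  # roughly an order of magnitude

Under these assumptions, the media itself is the same; what changes is how much of each I/O is spent traversing the interface, which is where the order-of-magnitude claim comes from.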
David Floyer, chief technology officer of Wikibon, a community-focused research and analyst firm based in Marlborough, Mass., predicts that high-speed PCIe cards in enterprise servers will transform the industry within five years. He said the cards will improve productivity so much that even the typical data center will pay the extra price for them, mainly as it deploys new systems.
"It's going to be [the] fastest migration of a technology in history, and the reason is that the business value created from using this technology to its full [potential] is so overwhelming that it'll be a race to do it. Companies that don't get it done quickly will be at a significant disadvantage," Floyer said, likening the scenario to the rapid shift taking place with consumer devices.
Floyer envisions a data center world five years out with three tiers: PCIe flash in servers for top-tier data; a second tier of roughly 15% of overall data on flash-only arrays that will increasingly become PCIe-based; and a bottom tier holding about 80% of capacity on a mix of disk, tape and possibly some flash for cache. He called the bottom tier "the cheap layer [that] you hope not to access."
By contrast, Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Ore., said he is unconvinced that PCIe cards will hold appeal for server-based storage beyond shops with use cases such as high-performance computing, financial trading and online transaction processing.
Staimer said most IT shops run virtualized servers, and those servers require shared storage to take advantage of important features such as live migration of virtual machines. That's why most companies use PCIe flash mostly as cache in virtual environments, according to Staimer.
"I'm a tad cynical about this [PCIe] market," Staimer said. "I don't see it the way other people are calling it. I don't see the value proposition the same way. I'm hard-pressed to see the general use case. There's a reason we went to shared storage. Data protection is a lot easier; management is a lot easier. If I have 1,000 servers and 1,000 PCIe cards, or more, I can't afford the management."
 
PCIe flash performs better -- at a price
Dennis Martin, founder and president of Arvada, Colo.-based Demartek LLC, said that a single enterprise PCIe card outperformed a single enterprise SAS or SATA SSD in all test cases at his company's lab.
When a PCIe SSD is used as primary storage, Demartek typically sees latencies of less than 1 millisecond -- in the range of 100 to 1,000 microseconds -- depending on the workload. Martin added that throughput tends to range from several hundred megabytes per second to well over 1 GBps.
Demartek has seen latencies well below 1 millisecond for enterprise SAS and SATA SSDs, but not as low as for enterprise PCIe SSDs. The high mark for throughput with an individual SAS SSD was 500 MBps, according to Martin.
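Those latency figures translate directly into single-threaded I/O rates: at queue depth 1, a device can complete at most one I/O per latency period. The Python sketch below applies that arithmetic to the latency range Demartek reported; the 4 KB transfer size is an assumption for illustration.

    # At queue depth 1 (QD1), IOPS = 1 second / per-I/O latency.
    US_PER_SECOND = 1_000_000

    for label, latency_us in [("PCIe SSD, best case ", 100),
                              ("PCIe SSD, worst case", 1_000)]:
        iops = US_PER_SECOND / latency_us
        mb_per_s = iops * 4 * 1024 / 1_000_000  # assumes 4 KB I/Os, for illustration
        print(f"{label}: {iops:>6,.0f} IOPS at QD1 (~{mb_per_s:.0f} MB/s at 4 KB)")

The several-hundred-MBps throughput figures Martin cites come from deeper queues and larger transfers, which keep the device busy with many outstanding I/Os rather than one at a time.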
"My general conclusion is that it's mostly about what you need and how much you're willing to pay to meet that need. You will get outstanding performance for PCIe SSDs, but they are the most expensive form factor for SSDs today," Martin said. "You will get great performance for SAS and SATA SSDs for considerably less money than PCIe SSDs, but at more cost than hard-disk drives."
Joseph Unsworth, a research vice president focusing on NAND flash and SSD technology at Stamford, Conn.-based Gartner Inc., said the PCIe market has yet to meet his expectations, with unit shipments last year of only about 450,000.
He cited reasons such as price, the lack of competition for Fusion-io and the absence of standardized drivers. But he predicted PCIe will remain poised for high growth as the dominant SSD interface for enterprise servers for years to come.
Competition is heating up. Besides EMC, all-flash array startup Violin Memory also began selling PCIe cards this month. PCIe startup Virident Systems scored an OEM deal with Seagate in January, and many people in the industry believe EMC uses Virident's FlashMax II cards as its XtremSF PCIe flash.
Another knock on PCIe cards is that they boost application performance only in the server where they're installed, with no ability to share their resources with other physical servers. QLogic Corp.'s Mt. Rainier host bus adapter card and Sanbolic Inc.'s Melio clustered file system address that limitation, as do virtual storage appliances such as Fusion-io's ION Data Accelerator software. Other vendors have products in the works, but it's unclear when they will come to market.
"The problem with that kind of sharing is that all the other servers are sharing at the [slower] speed of interconnect -- 1 Gig or 10 Gig," Staimer said. "I haven't seen any of the PCIe-sharing capabilities take off in the market."
 
Value is in the software
Ray Lucchesi, president and founder of Silverton Consulting Inc. in Broomfield, Colo., said the market is starting to realize the advantages of PCIe, especially as the flash in the cards gets cheaper and the technology becomes more widely available in the wake of Fusion-io's success.
"The value is moving from the hardware to the software," Lucchesi said. "Some of the value that EMC brings is in the caching software, and ultimately, as they get their caching software tied into their storage controller caching, then it becomes even more impressive. A lot of other companies -- IBM, NetApp and others -- are working on the software to help integrate the PCIe SSD cache into the storage sub-subsystem cache so that data can flow to wherever it's needed fastest and provide the best performance."
Floyer claimed that any problems with PCIe cards will soon be solved.
"You will absolutely have standard servers with PCIe cards," Floyer said. "Are there going to be holdouts? Are there going to be areas where disk is going to be? Sure, but not many. In this case, you will make that transformation. You will move everything over to PCIe, and you will rewrite the applications to take advantage of it."