|Introduction and Background|
Disruptive technology: it's a term thrown around so freely by industry marketing types these days that, quite frankly, it's getting worn out. In the mid-'90s, Harvard Business School professor Clayton Christensen coined the phrase to describe a product innovation that breaks current convention and exceeds market expectations so vastly that market leaders might not see it coming, and the market itself may not know how to react. In the years since, overzealous marketers have hailed plenty of innovations as "disruptive technologies," though in reality that level of innovation is on a different scale altogether. A truly disruptive technology is unequivocally and unmistakably a game-changer.
Take NAND Flash memory, for example. A few years ago, this new storage technology turned the camera market on its ear and changed the landscape forever, with digital cameras now having displaced film cameras almost completely. Then USB flash sticks came along, and the floppy drive went extinct. Disruptive enough for you? Now it has become clear that the NAND Flash chip has set its disruptive sights on yet another market: secondary computer storage. There is little question at this point that SSD (Solid State Drive) technology will eventually supplant traditional rotational media, with the possible exception of large bulk storage arrays, at least for the time being. It is debatable, though, when the transition will reach critical mass beyond a few drives shipped in notebooks, some higher-end desktop configurations and the DIY niche.
But is even the SATA SSD as we know it today eventually going to end up on a proverbial endangered species list? We'll leave you pondering that question as we take a competitive look at two SSD solutions that peg the performance scales with very different approaches to the technology.
Storage Of The Future - SATA SSD or PCIe?
We've certainly heard of Fusion-io's bleeding-edge PCI Express-based SSD solution (top right) but to date hadn't had the chance to check it out on the test bench. And though we've put Intel's wonderfully fast X25-M SSD through its paces in standalone testing, imagine what it would be like with up to four drives in RAID 0. You see where we're going here: a battle royale between what is arguably some of the fastest SSD storage technology money can buy right now.
So the stage is set, but before we start ripping up benchmarks, let's expand on one of the paths solid state storage might take on its disruptive journey through the valley of the hard disk dinosaur. Let's drop down for a closer look at Fusion-io's 160GB ioDrive. Does SATA have to watch its back?
|Competitors: Fusion-io's 160GB ioDrive|
As we mentioned, a disruptive technology not only sets a new standard of performance, features, or both, but often does so in a way that drives a complete resurfacing of the competitive landscape. Clearly, that's what NAND Flash storage technology is affording us, and there are some rather bright minds out there looking to take advantage of the opportunity. Fusion-io currently offers an enterprise-class PCI Express SSD card that boasts ridiculously high read and write throughput numbers, but let's consider the why and how for a bit before taking a closer look at the technology.
First, let's look at what exists in current desktop storage architectures (just to keep it simple) and how things are hooked up. Currently, whether you're plugging in an SSD or a standard spinning hard drive, you're working with two interfaces to get data off the drive to the host processor. The SATA (Serial ATA) interface needs to be accommodated so the drive can be accessed via the legacy ATA command set. Over the years, though the interface has migrated from PATA (Parallel ATA) to a higher-speed serial link, the low-level command set hasn't changed, in order to maintain backwards compatibility. Traditionally, the Southbridge controller on the motherboard (or perhaps a discrete SATA controller) offers a number of SATA ports to attach your hard drives to. But how does the SATA controller bolt up to the rest of the system architecture?
It's called bridging. The SATA controller, whether it resides on a discrete card or in the Southbridge chipset, has to be bridged to PCI Express or another native interface so the host CPU can access the data. It's not rocket science, obviously; that's why they call it a SouthBRIDGE. But what does bridging do for us, besides affording the host processor the ability to talk to an otherwise foreign or "non-native" interface (SATA) over a native one (PCIe, etc.)? In short, nothing. It just adds latency and slows things down. Even when bridging two high-speed serial interfaces like SATA and PCI Express, you're adding latency going from one domain to the other. It's that simple. This is a necessary evil, however, because we're not going to rip out generations of ATA command set compatibility that easily. And yes, an even faster 6Gb/sec third-generation SATA interface is coming.
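To put the bridging argument in concrete (if simplified) terms, here's a quick sketch that totals per-hop latency along a bridged SATA path versus a native PCIe attach. The 50-microsecond NAND access figure is typical of SLC flash; every other per-hop number is a made-up placeholder chosen only to illustrate how the extra protocol hop adds overhead, not a measurement.

```python
# Illustrative latency model, not measured data: each entry is a
# hypothetical per-hop delay in microseconds along the request path.
def total_latency_us(hops):
    """Sum per-hop latencies (in microseconds) along a storage path."""
    return sum(hops.values())

bridged_path = {
    "pcie_link": 1.0,      # host CPU to Southbridge (placeholder)
    "sata_bridge": 2.0,    # PCIe <-> SATA protocol translation (placeholder)
    "sata_link": 1.0,      # SATA phy/transport (placeholder)
    "nand_access": 50.0,   # SLC NAND read, typical spec
}
native_path = {
    "pcie_link": 1.0,      # direct PCIe attach (placeholder)
    "nand_access": 50.0,
}

print(total_latency_us(bridged_path))  # 54.0
print(total_latency_us(native_path))   # 51.0
```

The absolute difference is small per request, but every request pays it, and the bridged path also caps throughput at SATA's 3Gb/sec ceiling.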
However, with solid state technology, like NAND Flash, at our disposal, it becomes much easier to just bolt up to the native PCI Express interface and eliminate the latencies of bridging to SATA, as well as the current bandwidth limitations of 3Gb/sec SATA, which SSDs are already very close to saturating. It is with this disruptive approach that Fusion-io has entered the market.
Looking at the card itself, you'll note its pure simplicity and elegance. There are but a few passive components, along with a bunch of Samsung SLC NAND Flash and Fusion-io's proprietary ioDrive controller. The 160GB card we tested above is a half-height PCI Express x4 implementation. From a silicon content standpoint it's a model of efficiency, though the PCB itself is a 16-layer board, which is definitely a complex routing challenge for all those NAND chips.
ioDrive controller block diagram
Dropping down to the block diagram of the ioDrive's controller, we see the PCIe Gen1 x4 interface on the card offers 10Gbps of bandwidth in each direction (20Gbps total), which obviously leaves more than enough headroom for data throughput as NAND Flash technology continues to scale. The diagram above is an over-simplification, however, and the real magic of Fusion-io's proprietary technology resides in the "Flash Block Manager" block. The design implements a 25-channel parallel (x8 banks) memory architecture, with one channel dedicated to error detection and correction, as well as self-healing (data recovery and remapping) capabilities for the flash memory at the cell level. By way of comparison, Intel's X25-M SSD implements a 10-channel design.
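As a sanity check on those interface numbers, a quick back-of-the-envelope calculation shows where the 10Gbps figure comes from, and what's actually usable per direction once PCIe Gen1's 8b/10b line coding is accounted for:

```python
# PCIe Gen1 x4 link budget, per direction.
lanes = 4
raw_gbps_per_lane = 2.5        # PCIe Gen1 signaling rate per lane
encoding_efficiency = 8 / 10   # 8b/10b line coding: 8 data bits per 10 on the wire

raw_gbps = lanes * raw_gbps_per_lane           # the quoted 10Gbps figure
effective_gbps = raw_gbps * encoding_efficiency
effective_MBps = effective_gbps * 1000 / 8     # usable payload bandwidth

print(raw_gbps)        # 10.0
print(effective_MBps)  # 1000.0
```

Roughly 1000MB/sec of usable bandwidth per direction comfortably clears the drive's 750MB/sec read rating, which is the headroom the article refers to.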
In total, the solution, along with its Samsung flash memory, is specified at up to 750MB/sec of read throughput and 650MB/sec for writes. You'll also note that the ioDrive is rated for 50 microseconds of read latency, which is pretty much standard for SLC flash-based SSDs these days. If you consider that the average standard hard drive is specified at 8 - 15 milliseconds access time, it's obvious SSD technology is orders of magnitude faster for random access requests.
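The "orders of magnitude" claim is easy to verify from the spec figures just quoted:

```python
# Random access time comparison: SLC SSD spec vs. typical hard drive seeks.
ssd_read_latency_s = 50e-6   # 50 microseconds (ioDrive rating)
hdd_seek_low_s = 8e-3        # 8 ms, fast desktop drive
hdd_seek_high_s = 15e-3      # 15 ms, slower drive

print(round(hdd_seek_low_s / ssd_read_latency_s))   # 160
print(round(hdd_seek_high_s / ssd_read_latency_s))  # 300
```

A 160x to 300x advantage on access time is two-plus orders of magnitude, which is exactly why SSDs feel so much faster on random workloads even when sequential throughput is comparable.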
Fusion-io's ioManager Control Panel
Low-level formats performed before each benchmark run...
Above is a screen shot of Fusion-io's rather simple ioManager software tool for managing the ioDrive volume in our Windows Vista 64-bit installation. We should note that there are three options for configuring the drive: 1) Max Capacity, 2) Improved Write Performance (at the cost of approximately 50% capacity) and 3) Maximum Write Performance (at the cost of approximately 70% capacity). We low-level formatted the ioDrive with option 1 before each new benchmark test, since we felt this would likely be the most common usage model.
|Competitors: Intel's X25-M RAID 4-Pack|
We've taken a detailed look at Intel's MLC flash-based X25-M SSD a couple of times here already, and we've found it to be one of the fastest all-around SSDs currently on the market. That being said, the potential for performance scaling by setting up four of these drives in a RAID 0 configuration is promising, since theoretically we could achieve up to 1GB/sec of read bandwidth and 280MB/sec for writes. Though available write throughput would be lower than the Fusion-io solution's, the read bandwidth of a four-drive RAID 0 array of these drives is enormous, to be sure.
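Those theoretical figures are simple multiplication of Intel's published per-drive sequential ratings (roughly 250MB/sec reads and 70MB/sec writes for the X25-M), assuming ideal RAID 0 striping with zero controller overhead:

```python
# Ideal RAID 0 throughput ceiling: N drives striped with no overhead.
def raid0_ceiling(drives, read_MBps=250, write_MBps=70):
    """Return (read, write) MB/s ceilings for an N-drive stripe set,
    using the X25-M's published sequential ratings as defaults."""
    return drives * read_MBps, drives * write_MBps

reads, writes = raid0_ceiling(4)
print(reads, writes)  # 1000 280
```

Real arrays fall short of the ceiling, of course, as the diminishing read-scaling returns later in this article demonstrate.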
On a side note, we're sure many desktop end users have found themselves clamoring for a mechanical solution inside a standard ATX chassis that can support mounting an SSD with its 2.5" hard drive form factor, much less four of them in tandem. We discovered a solution to this problem that is both elegant and highly functional, especially for a multi-drive installation.
Toggle this check-box (right) if your numbers look low...
Above you can see we've found a four-bay 2.5" hot-swappable drive cage from Supermicro that fits nicely into a standard 5.25" drive bay. A single 4-pin Molex connector powers the entire cage, and a single 4-to-1 SATA cable connects each drive in the cage to the required host controller ports. We decided to plug our four SSDs right into the ICH10R SATA controller on an Intel X58-based motherboard. Finally, there is a small high-speed fan in the back of the unit which can thankfully be disabled with a jumper setting, since the fan itself is really loud, and SSDs need little airflow and produce very little heat compared to their 2.5" spinning counterparts. Even with the fan completely disabled, thermals in the cage were just fine with the SSD 4-pack. And of course, this rendered our new high-end SSD RAID storage solution completely silent.
Once we had everything installed mechanically, it was time to set up our RAID array and initialize the volume for testing in our operating system (Vista 64-bit). One small snafu that plagued our benchmark results was the Vista "Enable advanced performance" option you see captured in the screen shot above. On some benchmark runs, particularly with HD Tach, we found an inexplicable performance degradation. It was only after we unchecked this box that we saw performance return to expected levels for the configuration we were testing. Re-checking the box afterward had little effect on performance. This anomaly was observed so consistently that we'd suggest unchecking this box if your installation includes an Intel SSD. We have alerted Intel to this issue and are awaiting further detail. Regardless, we're confident in the benchmark numbers you're about to see with the setup we had configured for testing.
|The Setup, Methodology and SANDRA|
Our Test Methodologies: Under each test condition, the Solid State Drives tested here were installed as secondary volumes in our testbed, with a standard spinning hard disk for the OS and benchmark installations. The SSDs were left blank without partitions wherever possible, unless a test required them to be partitioned and formatted, as was the case with our ATTO benchmark tests. Windows firewall, automatic updates and screen savers were all disabled before testing. In all test runs, we rebooted the system and waited several minutes for drive activity to settle before invoking a test.
On a side note, thanks to our friends at DV Nation for their assistance in supplying the Fusion-io ioDrive we used for testing. If you're looking for high end SSD storage, they're a good place to start.
Also, you'll note that we performed all of our SSD RAID testing with the Intel X25-M drives on an Intel X58 chipset-based motherboard via its ICH10R Southbridge SATA controller. This controller offered peak RAID 0 performance versus even the hardware-based RAID controllers we had in the lab for testing.
In our SiSoft SANDRA testing, we used the Physical Disk test suite. We ran the tests without formatting the drives and both read and write performance metrics are detailed below. Please forgive the use of these screen captures and thumbnails, which will require a few more clicks on your part. However, we felt it was important to show you the graph lines in each of the SANDRA test runs, so you are able to see how the drives perform over time and memory location and not just an average rated result.
Looking at these preliminary, high-level numbers from SANDRA, it's apparent that there are somewhat diminishing returns in read performance as we scale from two to four SSDs in RAID 0. Write performance, however, scales almost linearly, nearly doubling from two drives to four.
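One way to put a number on "diminishing returns" is a scaling-efficiency ratio: measured array throughput divided by the ideal N-times-single-drive figure. The numbers below are illustrative placeholders, not our exact SANDRA results:

```python
# Scaling efficiency: how close a measured RAID 0 result comes to the
# ideal linear-scaling ceiling. 1.0 would be perfect scaling.
def scaling_efficiency(measured_MBps, drives, single_drive_MBps):
    ideal = drives * single_drive_MBps
    return measured_MBps / ideal

# Hypothetical example: a 4-drive array measuring 800 MB/s of reads
# against a 250 MB/s single-drive baseline.
print(scaling_efficiency(800, 4, 250))  # 0.8
```

By this metric, a two-drive array that hits, say, 90% efficiency while a four-drive array hits 80% is exactly the diminishing-returns pattern the SANDRA read results show.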
The Fusion-io ioDrive, however, shows a much less saw-toothed performance curve, with flat 650+MB/sec throughput for both reads and writes. Looking at the graph, you can tell this technology means business, and a sequential access pattern like a SANDRA benchmark run isn't going to saturate its throughput capability.
|ATTO Disk Benchmark|
ATTO is a more straightforward disk benchmark that measures raw transfer rates for both reads and writes across a specific volume length and graphs them in an easily interpreted chart. We chose .5KB through 8192KB transfer sizes over a total max volume length of 256MB. This test was performed on blank drives formatted with NTFS partitions.
Fusion-io 160GB ioDrive
ATTO shows a very similar performance progression for the Intel RAID arrays, whether using a two- or four-drive installation. Read performance for the Intel arrays lands in the mid-500 to 600MB/sec range, higher in the case of the four-drive setup. Write performance with the Intel-based RAID 0 arrays again pretty much doubles going from two drives to four. Obviously, small block transfers from .5K to 4K are not a strong suit for these SSDs. In any case, the performance graphs for the Intel RAID arrays are impressive.
And again, the Fusion-io drive shows flat-lined read and write performance, this time in 800MB/sec territory. Also again, the performance bars scale so linearly that it's as if, at each point on the graph, the Fusion-io drive is maximizing throughput from its NAND Flash array to its fullest potential.
|HD Tach Testing|
Simpli Software's HD Tach is a low-level benchmark that measures sequential read and write throughput, burst speed and random access time across the full span of a drive.
Fusion-io 160GB ioDrive
HD Tach shows a somewhat different picture, though: this time the Intel SSD RAID 0 arrays move up a notch while the ioDrive drops one. The same relative performance pattern emerges for both solutions, and the ioDrive is pretty much pegged at flat read and write numbers across its volume. There was a large drop around the 60GB mark, which may be an anomaly in the test, though we can't be completely sure.
|PCMark Vantage|
Next we ran the Intel RAID 0 arrays and our Fusion-io ioDrive through a battery of tests in PCMark Vantage from Futuremark Corp. We specifically used only the HDD Test module of this benchmark suite to evaluate all of the drives we tested. Feel free to consult Futuremark's white paper on PCMark Vantage for an understanding of what each test component entails and how it calculates its measurements. For specific information on how the HDD Test module arrives at its performance measurements, we'd encourage you to read pages 35 and 36 of the white paper.
We really like PCMark Vantage's HDD tests for their real-world application measurement approach. From simple Windows Vista start-up performance to data streaming from a disk in a game engine and video editing with Windows Movie Maker, we feel these tests better illustrate the real performance profile of an SSD in an end user/consumer PC or workstation usage model.
This specific set of PCMark Vantage HDD tests is generally read-intensive, whether reading files like images in Windows Photo Gallery or scanning the hard drive for threats in Windows Defender. The Vista Startup and gaming tests are indicative of application loading performance, which is also a read-intensive operation. Regardless, here we see two Intel SSDs in RAID 0 offering around 80 - 90% of the performance of the four-drive array, with the exception of Windows Photo Gallery, where a 50+% performance gain is observed when scaling to four drives.
The Fusion-io 160GB ioDrive offers a 2 to 2.5X performance advantage across the board in this series of benchmarks, with Vista Startup and Gaming performance being its strong suits. Unfortunately, the ioDrive cannot currently be set up as a bootable volume, though we are told Fusion-io's next-generation, lower-cost ioXtreme drive is expected to have this capability. In any event, you can't help but be a little blown away by how this product performs versus a four-drive RAID 0 setup of some of the fastest traditional SATA SSD technology available on the market today. Granted, the ioDrive's cost is several times that of the aforementioned SATA solution ($7200 for 160GB, $2995 for 80GB), so this level of performance is to be expected. Then again, this sort of bleeding-edge technology generally rides a steep cost reduction curve as the market matures and economies of scale in manufacturing can be achieved.
|PCMark Vantage (cont.)|
Our next series of Vantage tests stresses the current weakness of most NAND Flash: write performance. Applications like video editing, streaming and recording are not what we would call a strong suit for the average SSD, due to their high mix of random write transactions.
Interestingly, the performance of our Intel X25-M RAID 0 arrays completely skews the graph here, posting over twice the score of the ioDrive in the Windows Media Center test. In its white paper, Futuremark says this test "measures concurrent disk drive performance of Media Center tasks, including SDTV video playback, SDTV video streaming to a Windows Media Center Extender and SDTV video recording," with about 50% read and 50% write operations. Perhaps we're seeing a bit of immaturity in the Fusion-io product's tuning for these standard Windows applications. The other test that doesn't bode well for the ioDrive is Movie Maker, which is also more heavily weighted toward write performance (46%).
|The IOMeter Question|
As we noted in a previous SSD round-up article, though IOMeter is a well-respected industry-standard drive benchmark, we're not completely comfortable with it for testing SSDs or for comparing their performance to standard hard drives. Though our actual IOMeter results appear to be accurate, it is debatable whether certain access patterns, as they are presented to and measured on an SSD, provide a valid example of real-world performance, at least for the average end user. That said, we do think IOMeter is a solid gauge of the relative available bandwidth of a given storage solution. Regardless, here's a sampling of our test runs with IOMeter version 2006.07.27 on our SSD RAID pack versus the ioDrive.
Here we dropped in a single Intel SSD as well, as a reference baseline. In our database (server) access pattern, which is comprised of completely random accesses with 33% write transactions, you can see the Intel X25-M RAID array scales dramatically as you add more drives and turn up the number of IO requests per target. Even more interesting, at a relatively low workload of 8 outstanding IOs, the ioDrive is just about on par with the four-disk Intel array. Turn up the number of IO requests, however, and the ioDrive obliterates the Intel RAID packs with over twice the number of IOPS (Input and Output Operations Per Second).
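The relationship between outstanding IOs and IOPS follows from Little's Law (concurrency = throughput x latency), which is why piling on requests raises IOPS until a device finally saturates. A minimal sketch, with hypothetical latency numbers rather than our measured results:

```python
# Little's Law rearranged for storage: IOPS ~= outstanding_IOs / avg_latency,
# valid only up to the point where the device saturates.
def max_iops(outstanding_ios, avg_latency_s):
    return outstanding_ios / avg_latency_s

# Hypothetical: 8 outstanding requests at 100 microseconds average service time.
print(round(max_iops(8, 100e-6)))   # 80000
# Deepen the queue with the same per-request latency and throughput scales.
print(round(max_iops(64, 100e-6)))  # 640000
```

A deeply parallel controller like the ioDrive's 25-channel design keeps per-request latency flat at high queue depths, which is why its IOPS curve keeps climbing while a shallower design levels off.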
In our Workstation access pattern, which consists of only 20% write operations and a bit more sequential access work, there are some rather interesting observations. Again, performance scales relatively well as you add drives to the Intel RAID 0 array, though the limitation we saw in some of our other synthetic benchmarks, like HD Tach and ATTO, manifests itself again, with the two-drive RAID setup tracking surprisingly close to the four-drive array.
For the Fusion-io ioDrive, as we tax it with a larger number of requests, its performance curve just shoots for the moon. Though we didn't plot it here, when we turned the number of outstanding IOs up to 2048, the drive actually exceeded its theoretical 100K IOPS top end (103K IOPS, to be exact) - which, we will admit, is just an insane amount of available throughput.
So, what have we learned here today? First, there is little question that RAID 0 performance with Intel's X25-M SSDs is far more impressive than a single-drive installation, as we expected. That said, at least from a cost/performance perspective, the sweet spot seems to be a two-drive RAID 0 setup, which offers about 80% of the performance of a four-drive array in terms of reads, though not as much in write-intensive operations. Then of course there is the practicality of a four-drive RAID 0 setup, which introduces four possible points of failure should an SSD go bad. Obviously this isn't the sort of setup you should store critical files on, though the reliability of Intel's X25-M has been solid, and as an OS and application volume it's not quite as risky as it appears on the surface. Alternatively, we'd advise a strong hardware RAID controller and a RAID 5 setup if a quad-SSD array is your goal.
For Fusion-io's ioDrive, our enthusiasm for its ridiculously fast technology is tempered only by its price point. What's most impressive about the ioDrive is that it delivers seemingly red-lined performance in both read- and write-intensive workloads, and as you pile on concurrent IO requests, the little half-height PCIe card just sucks them up and spits them back at you faster. Not only does Fusion-io's SSD technology circumvent the looming SATA bottleneck, but it also compensates for some of the intrinsic limitations of the current generation of NAND Flash, which in our humble opinion is, without question, a game-changer.
When we embarked on our initial benchmarking efforts with the products and test systems in this article, we went in with the thought of showing you what Fusion-io's ioDrive can do, along with an alternative setup built on Intel's SSDs that costs a lot less than Fusion-io's enterprise-class technology. However, as we ripped through each pass of our various test suites, it became apparent that we were really comparing apples and oranges (though we hate to use that cliché). Currently, the 160GB ioDrive we tested lists for $7200. Obviously this is not a product targeted at even the highest-end enthusiast desktop user, but rather at an enterprise-level SAN (Storage Area Network) box, database or file server. That said, as we alluded to earlier, what Fusion-io has with its ioDrive technology capitalizes so well on the disruptive nature of NAND Flash that it could very well become a disruptive technology in and of itself.
The folks at Fusion-io like to refer to their ioDrive technology as "another memory tier" rather than a new storage medium. They don't claim to be interested in displacing bulk disk storage, and there is little question that, at these price points, they won't anytime soon. Rather, they speak of a system architecture that more cleanly fills the performance hole between local system memory (DRAM), which operates at nanosecond access times, and current spinning disk technology, which operates at millisecond access times. That makes sense to us, and attaching directly to the architecture via PCI Express, versus bridging from SATA to PCI Express, is arguably the right way to do it. Though SATA SSDs have enormous momentum and an ever-increasing adoption rate in the market right now, it's hard not to wonder how a direct-attach NAND Flash technology like the ioDrive might impact the market and future-generation computing architectures.
I'll go out on a limb here and say that a couple of years from now, things might look very different for SSD technology, and that the SATA interface itself might very well be approaching its twilight years. When you consider the design wins Fusion-io has already reeled in from big names like HP, IBM and Samsung, and the fact that other industry juggernauts are rumored to have some skin in the game (Dell), there is little question they've got traction and that this technology is compelling across many markets and applications.
Conversely, especially in the consumer space, there is obviously still plenty of life left in SATA, and today we've shown you RAID configurations that offer a ton of bandwidth and performance which, though not in line with what the ioDrive has to offer, cost several thousand dollars less and are certainly more practical for the average end user and even most workstation professionals. Again, however, these are two very different technologies, and the ioDrive is currently in a class by itself, though looking at the benchmark numbers and comparing the two was a fun and interesting ride, to be sure. We've already given the Intel X25-M SSD a product rating here at HotHardware, so we won't rehash that outcome in this article; the X25-M is clearly one of the best SATA SSDs on the market today. Fusion-io's ioDrive thoroughly impressed us out of the gate as well, however, and we can't wait to look at its more end user-targeted ioXtreme follow-up, which is coming to market very soon. Stay tuned and we'll be sure to fill you in with explicit detail.