IOMeter 2008 Test Results
As we noted in a previous SSD round-up, though IOMeter is clearly a well-respected, industry-standard drive benchmark, we're not completely comfortable with it for testing SSDs. Though our actual results with IOMeter appear to scale properly, it is debatable whether certain access patterns, as they are presented to and measured on an SSD, actually provide a valid example of real-world performance for the average end user. That said, we do think IOMeter is a good gauge of the relative bandwidth available from a given storage solution. In addition, there are certain higher-end workloads you can place on a drive with IOMeter that you really can't with any other benchmark tool currently available.
In the following tables, we're showing two sets of access patterns: our Workstation pattern, with an 8K transfer size, 80% reads (20% writes) and 80% random (20% sequential) access, and our Database access pattern of 4K transfers, 67% reads (33% writes) and 100% random access.
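For readers who want a concrete sense of what those percentages mean, the pattern parameters can be expressed as a simple synthetic workload generator. This is purely an illustrative sketch, not how IOMeter is implemented; the function name and structure are our own:

```python
import random

def gen_workstation_ops(n, device_bytes, xfer=8192,
                        read_pct=0.80, rand_pct=0.80, seed=0):
    """Generate (op, offset) pairs mimicking the Workstation pattern:
    8K transfers, 80% reads / 20% writes, 80% random / 20% sequential."""
    rng = random.Random(seed)
    blocks = device_bytes // xfer
    offset = 0
    ops = []
    for _ in range(n):
        # Roll for read vs. write, then random vs. sequential placement.
        op = "read" if rng.random() < read_pct else "write"
        if rng.random() < rand_pct:
            offset = rng.randrange(blocks) * xfer   # random, 8K-aligned
        else:
            offset = (offset + xfer) % device_bytes  # next sequential block
        ops.append((op, offset))
    return ops
```

Swapping in `xfer=4096`, `read_pct=0.67` and `rand_pct=1.0` would approximate the Database pattern instead.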
The first thing you'll note here is how flat the standard SATA SSD's performance was across test patterns and IO queue depths. The IO queue depth set in IOMeter essentially represents the number of simultaneous outstanding requests of the same access pattern, i.e., a heavier workload. The Vertex LE SSD was saturated here as we turned up queue depth. The entire group of PCI Express-based SSDs, however, scaled up significantly at higher request levels. OCZ's RevoDrive line-up flattens out at a queue depth of 144 or so, with the RevoDrive X2 offering the best performance of that family. The Fusion-io drives both offer significantly more IO bandwidth and also flatten out at higher queue depths.
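The scaling-then-flattening behavior described above follows from Little's Law: sustained IOPS roughly equals the number of outstanding requests divided by the average per-IO service time, until the drive's internal parallelism is exhausted. A back-of-the-envelope model makes the shape of the curves clear; note that the latency and ceiling figures below are hypothetical round numbers, not measured values from any of these drives:

```python
def estimated_iops(queue_depth, latency_us, saturation_iops):
    """Little's Law estimate: throughput = outstanding IOs / per-IO latency,
    capped at the device's saturation ceiling."""
    return min(queue_depth * 1_000_000 // latency_us, saturation_iops)

# Hypothetical drive: 100 us average service time, 140K IOPS ceiling.
estimated_iops(1,   100, 140_000)  # -> 10000 (one request in flight)
estimated_iops(12,  100, 140_000)  # -> 120000 (still scaling with depth)
estimated_iops(144, 100, 140_000)  # -> 140000 (saturated; curve goes flat)
```

This is why a drive can look unremarkable at a queue depth of 12 and still pull far ahead at 144: deeper queues keep more of its internal channels busy until the ceiling is reached.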
At a light-workload queue depth of 12, the LSI WarpDrive lands somewhere in the middle of the pack: faster than the more consumer/workstation-oriented ioXtreme card, but at about half the performance of the expensive, enterprise-class Fusion-io ioDrive. Like the WarpDrive, the ioDrive is also an SLC-based SSD. As we scale IO requests higher, however, the WarpDrive really begins to kick into high gear, offering performance well in excess of 100K IOPS in our less-than-optimized Workstation test condition. The WarpDrive actually proved to be the fastest of the group at higher queue depths, and it's impressive to see it overtake Fusion-io's best offering.
Our Database access pattern showed much the same performance grouping as the Workstation setup. In fact, with its higher mix of random write requests, all of the PCI Express solutions here scale to even higher IO throughput levels. The LSI WarpDrive broke 140K IOPS, which is impressive and the fastest score we've recorded under these test conditions, and by a long shot if you set aside Fusion-io's expensive $8K, SLC-based, 160GB ioDrive. Again, however, to realize the full potential of the WarpDrive, you have to exercise the product with lots of concurrent requests; average client workload performance is going to look a lot more like the numbers at an IO queue depth of 12 here. We should also note that CPU utilization across all test runs with the WarpDrive hovered around the 8 - 10% mark, which is not as good as we expected for a hardware RAID engine, but still reasonable when you consider the throughput of the product.