The Iometer Question:
As we noted in a previous SSD round-up, though Iometer is widely regarded as a well-respected, industry-standard drive benchmark, we're not completely comfortable with it for testing SSDs, or for comparing their performance to standard hard drives. The fact of the matter is, though our actual results with Iometer appear to be accurate, it is debatable whether certain access patterns, as they are presented to and measured on an SSD, provide a valid example of real-world performance, at least for the average end user. That said, we do think Iometer is a solid gauge of the relative throughput available from a given storage solution. Regardless, here's a sampling of our test runs with Iometer version 2006.07.27 on our SSD RAID pack versus the ioDrive.
Here we dropped in a single Intel SSD as well, for a reference baseline. In our database/server access pattern, which consists of completely random access with 33% of transactions dedicated to writes, you can see the Intel X25-M RAID array scale dramatically as you add more drives to the equation and turn up the number of IO requests per target. More interestingly, at a relatively light workload of 8 outstanding IOs, the ioDrive is roughly on par with the 4-disk Intel array. Turn up the number of IO requests, however, and the ioDrive obliterates the Intel RAID packs, delivering over two times the number of IOPS (Input/Output Operations Per Second).
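To make the workload concrete, here's a minimal sketch of what a database-style access pattern like the one described above might look like if you generated it yourself: fully random offsets with roughly a third of requests being writes. The 4 KB transfer size and test-region size are our own assumptions for illustration, not Iometer's exact configuration.

```python
import random

BLOCK_SIZE = 4096          # assumed transfer size per request
DISK_BLOCKS = 1_000_000    # assumed size of the test region, in blocks

def database_pattern(num_requests, write_pct=0.33, seed=0):
    """Generate (op, offset, size) tuples: 100% random access, ~33% writes."""
    rng = random.Random(seed)
    requests = []
    for _ in range(num_requests):
        op = "write" if rng.random() < write_pct else "read"
        offset = rng.randrange(DISK_BLOCKS) * BLOCK_SIZE  # random seek
        requests.append((op, offset, BLOCK_SIZE))
    return requests

reqs = database_pattern(10_000)
writes = sum(1 for op, _, _ in reqs if op == "write")
print(f"{writes / len(reqs):.0%} writes")
```

Because every offset is random, a benchmark like this rewards low seek latency and deep command queues, which is exactly where SSDs, and the ioDrive in particular, pull away from mechanical drives.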
In our Workstation access pattern, which consists of only 20% write operations and a bit more sequential access work, there are some rather interesting observations. Again, as you add drives to the Intel RAID 0 array, performance scales relatively well, though the limitation we saw in some of our other synthetic benchmarks, like HDTach and ATTO, manifests itself again, with the two-drive RAID setup holding its own even against the four-drive array.
For the Fusion-io ioDrive, as we tax it with a larger number of requests, its performance curve just shoots for the moon. Though we didn't plot it here, when we turned up the number of outstanding IOs to 2048, the drive actually exceeded its 100K IOPS theoretical top-end performance (103K IOPS, to be exact), which we will of course admit is just an insane amount of available IO throughput.
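The relationship between outstanding IOs and IOPS seen here follows from Little's law: sustained throughput is roughly the number of requests in flight divided by the average per-request latency. A quick sketch, with latency figures that are purely our own assumptions rather than measured values from the ioDrive or the Intel arrays:

```python
def iops(outstanding_ios, avg_latency_s):
    """Little's law for a storage queue: throughput = concurrency / latency."""
    return outstanding_ios / avg_latency_s

# Illustrative only: assumed latencies, not measured device figures.
print(f"{iops(8, 0.0002):,.0f} IOPS")    # shallow queue, 200 us per request
print(f"{iops(2048, 0.02):,.0f} IOPS")   # deep queue, 20 ms per request
```

This is why cranking up outstanding IOs lifts the curve: a deeper queue keeps the device's internal parallelism fed, and IOPS climbs until per-request latency grows enough that the device saturates.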