LSI WarpDrive 300GB PCI Express SSD Review

Article Index

Performance Summary and Conclusion

Performance Summary:
The LSI WarpDrive offered very strong performance, keeping close pace with, and occasionally outpacing, Fusion-io's ioDrive line of SLC-based products. Its one notable weakness was large sequential reads, whereas its strengths are obvious in high-load random read/write requests and in random writes generally. The WarpDrive posted some of the highest IOMeter throughput scores we've measured to date in our workstation and database tests, and it backed up those synthetic results with some of the best PCMark Vantage scores we've seen as well, again setting benchmark records in write-intensive tests. We did observe some oddities, however, specifically in larger sequential transfers during ATTO testing.



Regardless, we've seen many PCI Express SSD products in our labs that pair standard hardware RAID controllers with off-the-shelf SSD solutions based on third-party controllers like SandForce. Results from products like these have been mixed, ranging from yawn-inspiring to jaw-dropping. The LSI WarpDrive is an example of the latter, and it arrives tuned with a legacy of high-throughput server performance and LSI's excellent SAS controller technology behind it. Make no mistake: at a current street price of roughly $7400 - $8900 for 300GB of capacity, we're talking about $25 per gigabyte, so there is no way the enthusiast crowd is going to justify the cost of this beast. However, consider its closest competitor in performance and reliability terms (remember, we're talking SLC-based SSD reliability here): the 160GB Fusion-io, at around $8,000, works out to roughly $50/GB. Relatively speaking, the WarpDrive's price tag is, dare we say, very competitive.
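For the curious, the cost-per-gigabyte comparison above is simple arithmetic on the street prices quoted in this review; a quick sketch (the prices are the review's figures, not official vendor list pricing):

```python
# Back-of-the-envelope $/GB math using the street prices cited in this review.
warpdrive_price_low, warpdrive_price_high = 7400, 8900  # USD, 300GB WarpDrive
warpdrive_capacity_gb = 300

fusion_price = 8000        # USD, 160GB Fusion-io ioDrive
fusion_capacity_gb = 160

warp_per_gb_low = warpdrive_price_low / warpdrive_capacity_gb
warp_per_gb_high = warpdrive_price_high / warpdrive_capacity_gb
fusion_per_gb = fusion_price / fusion_capacity_gb

print(f"WarpDrive:  ${warp_per_gb_low:.2f} - ${warp_per_gb_high:.2f} per GB")
print(f"Fusion-io:  ${fusion_per_gb:.2f} per GB")
```

At the low end of the street-price range the WarpDrive lands just under $25/GB, versus about $50/GB for the 160GB ioDrive, which is where the "roughly half the cost per gigabyte" framing comes from.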

LSI actually has a TCO (Total Cost of Ownership) calculator that you can play with. Essentially, it illustrates how many hard drives you would need to scale to the IO response rate of the LSI WarpDrive. The metric used is page requests per second in a web server environment. At the lowest workload of 1000 pages per second, LSI claims you'd need $17,000 worth of hard drives to match the WarpDrive's performance. Looking at the numbers quickly, we're assuming they're using 73GB 15K RPM drives (standard fare in web server setups) to arrive at this figure. In short, if you need this kind of throughput in your datacenter, or in a workstation where crunching large volumes of video media means time and money, the WarpDrive offers some of the fastest SSD technology money can currently buy.

Short of that, the practicality of the device for most client applications is obviously not there, but we're hoping it's obvious to you that this type of technology isn't something you're going to need for a round of Call of Duty with your buddies on a Saturday night. Clearly the product is intended for more mission-critical applications; not that packing a clip and slugging it out with a virtual Fidel Castro isn't a noble cause, of course. Regardless, we're hopeful that LSI will work to improve sequential read performance and iron out some of the anomalies we saw with the WarpDrive. It's still very early in the product's life, so there's likely a bit more performance they can wring out of this already impressive PCI Express SSD. In the meantime, dropping one of these bad boys into a high-performance database server will definitely slash your response times and boost total bandwidth, you can bet on that. And with LSI's proven track record in storage, reliability won't have to suffer as a result.



  • Great high-load random read/write performance
  • Fastest IOMeter scores we've recorded yet under high queue depths
  • Bootable
  • NVSRAM for fast write caching in the event of power loss or reset
  • Best of class write performance
  • Claimed 2 million hours MTBF
  • Crazy expensive SLC design
  • Complex design with more possible points of failure
  • Still a few driver and firmware bugs holding back performance in certain cases


Related content