Fusion-io vs Intel X25-M SSD RAID, Grudge Match Review

We've taken a detailed look at Intel's MLC flash-based X25-M SSDs a couple of times here already, and we've found them to be among the all-around fastest SSDs currently on the market. That being said, the potential for performance scaling by setting up four of these drives in a RAID 0 configuration is promising, since theoretically we could achieve up to 1GB/sec of read bandwidth and 280MB/sec for writes.  Though our available write throughput would be lower than that of the Fusion-io solution, the read bandwidth of a four-drive RAID 0 array of these drives is enormous, to be sure.
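For reference, that theoretical ceiling is just the per-drive spec multiplied across the stripe set. The short Python sketch below is purely illustrative and assumes ideal RAID 0 scaling with no controller, interface, or driver overhead.

# Back-of-the-envelope RAID 0 scaling estimate for a four-drive X25-M array.
# Assumes ideal striping with zero controller or interface overhead.

PER_DRIVE_READ_MBS = 250   # Intel X25-M rated sequential read (MB/s)
PER_DRIVE_WRITE_MBS = 70   # Intel X25-M rated sequential write (MB/s)
DRIVES = 4

def raid0_theoretical(per_drive_mbs, drives):
    # RAID 0 stripes data across all members, so ideal bandwidth scales linearly.
    return per_drive_mbs * drives

print("Theoretical read: ", raid0_theoretical(PER_DRIVE_READ_MBS, DRIVES), "MB/s")   # 1000 MB/s, roughly 1GB/sec
print("Theoretical write:", raid0_theoretical(PER_DRIVE_WRITE_MBS, DRIVES), "MB/s")  # 280 MB/s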

On a side note, we're also sure many desktop users have found themselves clamoring for a mounting solution inside a standard ATX chassis that can accommodate an SSD in its 2.5" hard drive form factor, much less four of them in tandem.  We discovered a solution to this problem that is both elegant and highly functional, especially for a multi-drive installation.

  

Intel X25-M Series and Supermicro M14T-B 2.5" Drive Cage
Specifications and Features

Capacity: 80GB and 160GB
NAND Flash Components: Intel Multi-Level Cell (MLC) NAND Flash Memory; 10-Channel Parallel Architecture with 50nm MLC ONFI 1.0 NAND
Bandwidth: Up to 250MB/s Read Speeds, Up to 70MB/s Write Speeds
Read Latency: 85 microseconds
Interface: SATA 1.5 Gb/s and 3.0 Gb/s
Form Factor: 1.8" and 2.5" Industry Standard Hard Drive Form Factors
Compatibility: SATA Revision 2.6 Compliant; Compatible with SATA 3.0 Gb/s with Native Command Queuing and SATA 1.5 Gb/s interface rates
Life Expectancy: 1.2 million hours Mean Time Between Failures (MTBF)
Power Consumption: Active: 150mW Typical (PC workload); Idle (DIPM): 0.06W Typical
Operating Shock: 1,000G / 0.5ms
Operating Temperature: 0°C to +70°C
RoHS Compliance: Meets the requirements of EU RoHS Compliance Directives
Product Health Monitoring: Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) commands plus additional SSD monitoring





Toggle this check-box (right) if your numbers look low...

Above you can see we've found a four-bay 2.5" hot-swappable drive cage from Supermicro that fits nicely into a standard 5.25" drive bay.  A single 4-pin Molex connector provides power for the entire cage, and a single 4-to-1 SATA cable connects each drive in the cage to the required host controller ports.  We decided to plug our four SSDs right into the ICH10R SATA controller on an Intel X58-based motherboard.  Finally, there is a small high-speed fan in the back of the unit which, thankfully, can be disabled with a jumper setting; the fan itself is really loud, and SSDs need little airflow and produce very little heat compared to their 2.5" spinning-drive counterparts.  Even with the fan completely disabled, thermals in the cage were just fine with the SSD 4-pack.  And of course, this rendered our new high-end SSD RAID storage solution completely silent.

Once we had everything installed mechanically, it was time to set up our RAID array and initialize the volume for testing in our operating system (Vista 64-bit).  One small snafu that plagued our benchmark results was the Vista "Enable advanced performance" option you see captured in the screenshot above.  On some benchmark runs, particularly with HD Tach, we found an inexplicable performance degradation. It was only after we unchecked this box that we saw performance return to expected levels for the configuration we were testing.  Re-checking the box after this had little effect on performance.  This anomaly was observed so consistently that we'd suggest unchecking this box if your installation includes an Intel SSD.  We have alerted Intel to this issue and are awaiting further details.  Regardless, we're confident in the benchmark numbers you're about to see with the setup we had configured for testing.
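For readers who want to sanity-check their own array, a rough sequential-read measurement only takes a few lines of code. The Python sketch below is a simplified illustration, not the HD Tach methodology we used for the numbers in this article; the test-file path is a placeholder, and a proper benchmark would bypass the Windows file cache with unbuffered I/O, which this sketch does not do.

# Rough sequential-read throughput check (illustrative only).
# Note: reads here go through the OS file cache, so use a test file much
# larger than system RAM or the result will be inflated.
import time

TEST_FILE = r"E:\raid0_testfile.bin"   # placeholder: a large file on the RAID 0 volume
CHUNK = 1024 * 1024                    # read in 1MB chunks

def sequential_read_mbs(path):
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total_bytes += len(data)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

print("Sequential read: %.0f MB/s" % sequential_read_mbs(TEST_FILE))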


The problem I have with that is that it's volatile DRAM. I'm not sure how it works but does it have ROM on board to keep whatever image is on it from disappearing when the power goes down?


Hi Dave,

Yes, you are right about that with the DRAM; it will lose all data if power is gone.  That's why they have a battery pack (it charges every time you turn on the computer).   The manufacturer says the battery pack will be able to keep the data in the DRAM for 2-3 years after one full charge.  The good thing is you can make use of your old DIMMs and make a very fast 32GB drive.


Even if I had the money, I still wouldn't buy it. I would get an Areca 1680ix with 4-6x Vertex drives. It would be cheaper (ok, only by a grand or two) and it is bootable. This would be excellent if they could make it bootable.


LaMpiR:

Even if I had the money, I still wouldn't buy it. I would get an Areca 1680ix with 4-6x Vertex drives. It would be cheaper (ok, only by a grand or two) and it is bootable. This would be excellent if they could make it bootable.

I'm told the ioXtreme (the upcoming next-gen drive) will be bootable and, though still around $10/GB, will at least come in at around $895 for the 80GB drive.  That said, I'm not sure I'd waste 15 - 20GB on an OS install just so it could boot faster, but I'd load up all the apps and games I could on it for load time and responsiveness.


I would think, all it would take is to put a PCI-E connection on the ioFusion card purely for booting purposes. Once it's booted and drivers are loaded it would switch to the PCI-E bus. It would be a simple fix.


acarzt:

I would think, all it would take is to put a PCI-E connection on the ioFusion card purely for booting purposes. Once it's booted and drivers are loaded it would switch to the PCI-E bus. It would be a simple fix.


 

No, this is definitely more of a firmware/BIOS compatibility thing.  A machine can already boot off any PCIe target.   That target just has to broadcast itself as bootable to the system.  Or at least I think that's the way it works, in layman's terms.  :)


I am curious why you used X25-M's & not Intel's X25-E. Would you have expected a big difference had you done that?


Hi Terence,

Well, it's the simple fact that Intel hasn't been sending many of those drives out to the press, unfortunately. Those drives are also crazy expensive, at $348 for 32GB. Write speeds would likely have improved dramatically, though reads not as much.
