Fusion-io vs Intel X25-M SSD RAID, Grudge Match Review

As we mentioned, a disruptive technology not only sets a new standard of performance, features, or both, but often does so in a way that drives a complete resurfacing of the competitive landscape.  Clearly, that's what NAND Flash storage technology is affording us, and there are some rather bright minds out there looking to take advantage of the opportunity.  Fusion-io currently offers an enterprise-class PCI Express SSD card that boasts ridiculously high read and write throughput numbers, but let's consider why and how for a bit before taking a closer look at the technology.

First, let's look at what exists in current desktop storage architectures (just to keep it simple) and how things are hooked up.  Currently, whether you're plugging in an SSD or a standard spinning hard drive, you're working with two interfaces to get data off the drive to the host processor.  The SATA (Serial ATA) interface needs to be accommodated so the drive can be accessed via the legacy ATA command set.  Over the years, though the physical interface has migrated from PATA (Parallel ATA) to a higher-speed serial link, the low-level command set hasn't changed, in order to maintain backwards compatibility.  Traditionally, the Southbridge controller on the motherboard (or perhaps a discrete SATA controller) offers a number of SATA ports to attach your hard drives to.  But how does the SATA controller bolt up to the rest of the system architecture?

It's called bridging.  The SATA controller, whether it resides on a discrete card or in the Southbridge chipset, has to be bridged to PCI Express or another native interface so the host CPU can access the data.  It's not rocket science, obviously; that's why they call it a SouthBRIDGE.  However, what does bridging do for us, besides affording the host processor the ability to talk to an otherwise foreign or "not native" interface (SATA) over a native one (PCIe, etc.)?  In short, nothing.  It just adds latency and slows things down.  Even when bridging two high-speed serial interfaces like SATA and PCI Express, you're adding latency going from one domain to the other.  It's that simple.  This is a necessary evil, however, because we're not going to rip out generations of ATA command set compatibility that easily.  And yes, an even faster 6Gb/sec third-generation SATA interface is coming.

However, with solid state technology, like NAND Flash, at our disposal, it becomes much easier to just bolt up to the native PCI Express interface and eliminate the latencies of bridging to SATA, as well as the current bandwidth limitations of 3Gb/sec SATA, which SSDs are already very close to saturating.  It is with this disruptive approach that Fusion-io has entered the market.

Fusion-io 160GB ioDrive
Specifications and Features

Capacity: 160GB (80GB and 320GB MLC available)
NAND Flash Components: Single-Level Cell (SLC) NAND Flash Memory
Bandwidth: Up to 750MB/s read, up to 650MB/s write
Read Latency: 50 microseconds
Interface: PCI Express x4
Form Factor: Half-height PCIe card
Life Expectancy: 48 years at 5TB write-erase/day
Power Consumption: Meets PCI Express x4 power spec 1.1
Operating Temperature: -40°C to +70°C
RoHS Compliance: Meets the requirements of the EU RoHS Directive
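That life-expectancy figure implies a huge total write-erase budget. As a quick back-of-the-envelope check (a sketch, treating the 5TB/day and 48-year spec figures as exact):

```python
# Total write-erase volume implied by the spec table:
# 5 TB of write-erase traffic per day, sustained for 48 years.
tb_per_day = 5
years = 48
total_tb = tb_per_day * 365 * years  # ignoring leap days
total_pb = total_tb / 1000

print(f"Rated write-erase volume: {total_tb} TB (~{total_pb:.1f} PB)")
```

That works out to roughly 87,600TB, or on the order of 87 petabytes of total writes.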



 
 

Looking at the card itself, you'll note its pure simplicity and elegance.  There are but a few passive components, along with a bunch of Samsung SLC NAND Flash and Fusion-io's proprietary ioDrive controller.  The 160GB card we tested above is a half-height PCI Express x4 implementation.  From a silicon content standpoint it's a model of efficiency, though the PCB itself is a 16-layer board, which is definitely a complex maze of routing for all those NAND chips.


ioDrive controller block diagram

Dropping down to the block diagram of the ioDrive's controller, we see the PCIe Gen1 x4 interface on the card offers 10Gbps of raw bandwidth per direction (20Gbps total), which obviously offers more than enough headroom for data throughput as NAND Flash technology continues to scale.  The diagram above is an over-simplification, however, and the real magic of Fusion-io's proprietary technology resides in the "Flash Block Manager" block.  The design implements a 25-channel parallel memory architecture (x8 banks), with one channel dedicated to error detection and correction, as well as self-healing (data recovery and remapping) capabilities for the flash memory at the cell level.  By way of comparison, Intel's X25-M SSD implements a 10-channel design.
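As a sanity check on those interface numbers: PCIe Gen1 signals at 2.5GT/s per lane with 8b/10b line coding, so a x4 link's usable payload bandwidth works out to roughly 1GB/s per direction. A quick sketch of the arithmetic (real-world throughput is a bit lower still, due to packet framing overhead):

```python
lanes = 4
gt_per_s = 2.5                        # PCIe Gen1 signaling rate per lane (GT/s)
raw_gbps = lanes * gt_per_s           # 10 Gbps raw, per direction
payload_gbps = raw_gbps * 8 / 10      # 8b/10b coding: 8 data bits per 10 line bits
payload_mb_s = payload_gbps * 1000 / 8  # convert Gbps -> MB/s

print(f"Raw: {raw_gbps:.0f} Gbps/direction, payload: ~{payload_mb_s:.0f} MB/s/direction")
```

The card's 750MB/s read spec fits comfortably under that ~1000MB/s ceiling.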

In total, the solution, along with its Samsung flash memory, is specified as offering up to 750MB/sec of read throughput and 650MB/sec for writes.  You'll also note that the ioDrive is rated for 50 microsecond read latency, which is pretty much standard for SLC flash-based SSDs these days.  When you consider that the average standard hard drive is specified for 8 to 15 millisecond access times, it's obvious SSD technology is orders of magnitude faster for random access requests.
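To put "orders of magnitude" in concrete terms, here is the ratio implied by the figures above:

```python
ssd_latency_us = 50        # ioDrive rated read latency, in microseconds
hdd_access_ms = (8, 15)    # typical hard drive access-time range, in milliseconds

for ms in hdd_access_ms:
    ratio = (ms * 1000) / ssd_latency_us  # convert ms -> us, then compare
    print(f"{ms} ms HDD access is {ratio:.0f}x the ioDrive's 50 us read latency")
```

That's a 160x to 300x advantage on access time alone, before any parallelism comes into play.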


Fusion-io's ioManager Control Panel
Low-level formats performed before each benchmark run...

Above is a screen shot of Fusion-io's rather simple ioManager software tool for managing the ioDrive volume in our Windows Vista 64-bit installation.  We should note that there are three options for configuring the drive: 1) Max Capacity, 2) Improved Write Performance (at the cost of approximately 50% of capacity) and 3) Maximum Write Performance (at the cost of approximately 70% of capacity).  We low-level formatted the ioDrive before each new benchmark test with option 1, since we felt this would likely be the most common usage model.
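Assuming those percentages apply straightforwardly to the 160GB card (our illustration of the trade-off, not Fusion-io's published capacity figures), the three format options shake out roughly like this:

```python
raw_gb = 160
# (option name, approximate fraction of capacity given up for write headroom)
options = [
    ("Max Capacity",               0.0),
    ("Improved Write Performance", 0.5),
    ("Maximum Write Performance",  0.7),
]

for name, sacrificed in options:
    usable = raw_gb * (1 - sacrificed)
    print(f"{name}: ~{usable:.0f} GB usable")
```

The reserved space gives the controller spare blocks to write into, which is why write performance climbs as usable capacity shrinks.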

Comments
Lev_Astov 5 years ago

Nice article. I really can't wait till the Fusion-io drives come down in price. I also really like that Supermicro 2.5" rack, which I think you mean fits into a 5.25" bay, not 3.5". I currently have two SSDs TAPED to the inside of my case so you can see them facing the window.

On a related note, do you think you could do a similar set of tests on different RAID controllers? I noticed you said your X58 southbridge was faster than the controller cards you had in the lab, which is odd. I really want to know the benefits of using something like the crazy Areca RAID controllers with their own upgradeable RAM sticks, like the ones DV Nation have.

Dave_HH 5 years ago

Hey Lev,

We actually tried an Areca 1210 card with 256MB of onboard cache and the Intel array. It was actually slower than the ICH10R, believe it or not. I was surprised too. However, it doesn't take much heavy lifting for RAID 0, and Intel probably has their Southbridge chipset and drivers tuned pretty well for their own SSD, so perhaps it's not all that surprising. However, with a RAID 5 setup, you definitely want hardware RAID of course.

Dave_HH 5 years ago

And Lev, we caught that typo within like 3 seconds of go-live. You're QUICK man! LOL

acarzt 5 years ago

What kind of alignment did you guys do on those Intel drives? And did you guys enable write-back cache? Those Intel drives didn't seem to scale as well as they should have after 2 were installed. Changing the alignment alone could result in some large gains. I know they are probably hitting the limits of the board, but I'm still curious if you could squeeze a little more out of them.

I didn't think that Fusion-io card would be bootable. Once these kinds of cards start to catch on tho... that might change with some BIOS updates, etc.

bob_on_the_cob 5 years ago

You guys have 4 X25-Ms. Me wants.

Anyway, that's some crazy performance on the Fusion-io. I can't wait until stuff like that becomes affordable on the consumer end.

Dave_HH 5 years ago

Bob, stick around for another few weeks. We'll be looking at the ioXtreme drive from Fusion-io very soon. It will be priced in the hundreds range, rather than thousands. :)

bob_on_the_cob 5 years ago

[quote user="Dave_HH"]

Bob, stick around for another few weeks. We'll be looking at the ioXtreme drive from Fusion-io very soon. It will be priced in the hundreds range, rather than thousands. :)

[/quote]

That's a bit more interesting. A solid SSD would be perfect for my desktop, I think.

Dave_HH 5 years ago

Acarzt, write-back cache was definitely enabled, and as far as alignment goes, the drives were set up with a 128K stripe (the default for RAID 0 on the ICH10R) and formatted with defaults for NTFS.

acarzt 5 years ago

Ahh I see... That's exactly how I have mine set... it's amazing the difference write-back cache makes.

About the alignment tho... I was talking about the volume itself, after creating the RAID. I don't know if this applies to the Intel drives, but I did this with my RAID...

http://www.ocztechnologyforum.com/forum/showthread.php?t=53756

Vista has a default alignment of 1024. 128K is more common for RAID'd drives. I've seen guys pick up over 100MB/s by changing it. So instead of

"create partition primary align 64"

like in the walkthrough... you would use...

"create partition primary align 128"

It's worth a shot... it might make a difference :-P

bchiu 5 years ago

Awesome review, though $7200 wouldn't really be justifiable for home use. I saw a PCI-E add-on card that has 8 DIMM slots for you to add old memory modules and make into cheap, fast storage. It's non-bootable, only takes up to 32GB, and has an MSRP of $499 (without the RAM).

Dave_HH 5 years ago

The problem I have with that is that it's volatile DRAM. I'm not sure how it works but does it have ROM on board to keep whatever image is on it from disappearing when the power goes down?

bchiu 5 years ago

Hi Dave,

Yes, you are right about the DRAM; it will lose all data if power is gone.  Therefore they have a battery pack (it charges every time you turn on the computer).  The manufacturer says the battery pack will be able to keep the data in the DRAM for 2-3 years after one full charge.  The good thing is you can make use of your old DIMMs and make a very fast 32GB drive.

LaMpiR 5 years ago

Well, even if I had the money, I still wouldn't buy it. I would get an Areca 1680ix with 4-6 Vertex drives. It would be cheaper (ok, only by a grand or two) and it is bootable. This would be excellent if they could make it bootable.

Dave_HH 5 years ago

[quote user="LaMpiR"]

Well, even if I had the money, I still wouldn't buy it. I would get an Areca 1680ix with 4-6 Vertex drives. It would be cheaper (ok, only by a grand or two) and it is bootable. This would be excellent if they could make it bootable.

[/quote]

I'm told the ioXtreme (the coming next-gen drive) will be bootable and, though still around $10/GB, will at least come in at around $895 for the 80GB drive.  That said, I'm not sure I'd waste 15-20GB on an OS install just so it could boot faster, but I'd load up all the apps and games I could on it for load time and responsiveness.

acarzt 5 years ago

I would think, all it would take is to put a PCI-E connection on the ioFusion card purely for booting purposes. Once it's booted and drivers are loaded it would switch to the PCI-E bus. It would be a simple fix.

Dave_HH 5 years ago

[quote user="acarzt"]

I would think, all it would take is to put a PCI-E connection on the ioFusion card purely for booting purposes. Once it's booted and drivers are loaded it would switch to the PCI-E bus. It would be a simple fix.

[/quote]
 

No, this is definitely more of a firmware/BIOS compatibility thing.  A machine can boot off any PCIe target already.  That target just has to broadcast itself as bootable to the system.  Or at least I think that's the way it works, in layman's terms. :)

terence.redrocket 5 years ago

I am curious why you used X25-Ms and not Intel's X25-E. Would you have expected a big difference had you done that?

Dave_HH 5 years ago

Hi Terence,

Well, it's the simple fact that Intel hasn't been sending many of those drives out to the press, unfortunately. Those drives are also crazy expensive: $348 for 32GB. Write speeds likely would have improved dramatically, though reads not as much.
