First, let's look at how current desktop storage architectures are hooked up (keeping things simple). Currently, whether you're plugging in an SSD or a standard spinning hard drive, you're working with two interfaces to get data off the drive and to the host processor. The SATA (Serial ATA) interface has to be accommodated so the drive can be accessed via the legacy ATA command set. Over the years, though the interface has migrated from PATA (Parallel ATA) to a higher-speed serial link, the low-level command set hasn't changed, in order to maintain backwards compatibility. Traditionally, the Southbridge controller on the motherboard (or perhaps a discrete SATA controller) offers a number of SATA ports to attach your hard drives to. But how does the SATA controller bolt up to the rest of the system architecture?
It's called bridging. The SATA controller, whether it resides on a discrete card or in the Southbridge chipset, has to be bridged to PCI Express or another native interface so the host CPU can access the data. It's not rocket science, obviously; that's why they call it a SouthBRIDGE. However, what does bridging do for us, besides affording the host processor the ability to talk to an otherwise foreign or "not native" interface (SATA) over a native one (PCIe, etc.)? In short, nothing. It just adds latency and slows things down. Even when bridging two high-speed serial interfaces together, like SATA and PCI Express, you're adding latency going from one domain to the other. It's that simple. This is a necessary evil, however, because we're not going to rip out generations of ATA command set compatibility that easily. And yes, an even faster 6Gb/sec third-generation SATA interface is coming.
However, with solid state technology, like NAND Flash, at our disposal, it becomes much easier to just bolt up to the native PCI Express interface and eliminate the latencies of bridging to SATA, as well as the current bandwidth limitations of 3Gb/sec SATA, which SSDs are already very close to saturating. It is with this disruptive approach that Fusion-io has entered the market.
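The bandwidth ceiling mentioned above is easy to quantify. SATA uses 8b/10b line encoding, so only 8 of every 10 bits on the wire carry payload; the short sketch below is illustrative back-of-the-envelope arithmetic, not vendor-published figures.

```python
# Effective payload bandwidth of a SATA link after 8b/10b encoding overhead.
def sata_effective_mb_per_s(line_rate_gbps: float) -> float:
    bits_per_s = line_rate_gbps * 1e9
    payload_bits = bits_per_s * 8 / 10   # 8b/10b: 20% of raw bits are overhead
    return payload_bits / 8 / 1e6        # bits -> bytes -> MB/s

print(sata_effective_mb_per_s(3.0))  # 3Gb/sec SATA: 300.0 MB/s payload
print(sata_effective_mb_per_s(6.0))  # 6Gb/sec SATA: 600.0 MB/s payload
```

So even the coming 6Gb/sec generation tops out around 600MB/s of payload, which helps explain why a PCI Express-native design rated for 750MB/s reads sidesteps the problem entirely.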
| Capacity | 160GB (80GB and 320GB MLC available) |
| NAND Flash Components | Single-Level Cell (SLC) NAND Flash Memory |
| Bandwidth | Up to 750MB/s read speeds, up to 650MB/s write speeds |
| Read Latency | 50 microseconds |
| Form Factor | Half-height PCIe card |
| Life Expectancy | 48 yrs at 5TB write-erase/day |
| Power Consumption | Meets PCI Express x4 power spec 1.1 |
| Operating Temperature | -40°C to +70°C |
| RoHS Compliance | Meets the requirements of the EU RoHS Directive |
In total, the solution, built with Samsung flash memory, is specified at up to 750MB/sec of read throughput and 650MB/sec for writes. You'll also note that the ioDrive is rated for 50 microseconds of read latency, which is pretty much standard for SLC flash-based SSDs these days. Considering the average standard hard drive is specified at 8 - 15 millisecond access times, it's obvious SSD technology is orders of magnitude faster for random access requests.
Fusion-io's ioManager Control Panel
Low-level formats performed before each benchmark run...