OCZ IBIS HSDL Solid State Drive Preview


The primary differentiator of the IBIS drive is OCZ's new HSDL (High Speed Data Link) interface. OCZ notes that HSDL leverages existing PCI Express technology by bonding four PCI Express-compatible lanes together into a single high-speed serial interface capable of up to 20Gb/sec of throughput.

The chart above notes the new interface's bandwidth advantage over existing technologies such as 6Gbps SATA, Serial Attached SCSI, and Fibre Channel.  There's no question 10Gbps is a boatload of bandwidth, and OCZ claims it will be able to double that in 2011.

The implementation for the IBIS drive we're testing here is a single HSDL connection to a single-port HSDL x4 PCI Express adapter card.  We'll look at the hardware-level technologies employed shortly but, at a high level, the HSDL interface consists of 4 LVDS (Low Voltage Differential Signaling) pairs (8 total) bonded together into a single channel.  High-speed LVDS pairs are used in a myriad of serial interconnect technologies, from HyperTransport to FireWire, SCSI, SATA, RapidIO and, of course, good ol' PCI Express. LVDS is simply the physical medium used to transmit a low-voltage signal over copper. OCZ reports that the HSDL interface utilizes an 8b/10b encoding scheme, much like PCI Express, to transmit its data and also utilizes a PCI Express logic layer, so it's compatible with existing PCI Express architectures.
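To put those lane-bonding and encoding numbers in perspective, here's a quick back-of-the-envelope calculation (a Python sketch of our own, not anything from OCZ; the 2.5Gbps and 5Gbps per-lane rates are the standard PCI Express Gen1 and Gen2 signaling rates, and HSDL's real protocol overhead may differ):

# Rough throughput math for a bonded x4 serial link that uses 8b/10b encoding.
LANES = 4
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 payload bits for every 10 bits on the wire

def link_throughput(gbps_per_lane):
    """Return (raw Gb/s, effective MB/s) for the bonded link."""
    raw_gbps = LANES * gbps_per_lane
    effective_mb_s = raw_gbps * ENCODING_EFFICIENCY * 1000 / 8  # Gb/s -> MB/s
    return raw_gbps, effective_mb_s

for label, rate in (("Gen1-class lanes", 2.5), ("Gen2-class lanes", 5.0)):
    raw, eff = link_throughput(rate)
    print(f"{label}: {raw:.0f} Gb/s raw, ~{eff:.0f} MB/s after 8b/10b")
# Gen1-class lanes: 10 Gb/s raw, ~1000 MB/s after 8b/10b
# Gen2-class lanes: 20 Gb/s raw, ~2000 MB/s after 8b/10b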

[Diagrams: four-port HSDL card configurations and RAID arrays built from multiple IBIS drives]
Though we're testing with a single-port adapter card, OCZ will also be offering a 4-port card, and as you can see in the diagrams above, you can do some pretty interesting things with it.  Future incarnations of IBIS drives could be super-high-bandwidth, four-port SSDs with theoretically up to 40Gbps of available bandwidth for read/write transactions.  Where we come from, we'd call that drinking from the firehose, but of course these are all just theoretical numbers until we see the design running on a test bench.  Finally, traditional RAID arrays can also be built from multiple IBIS SSDs, as illustrated in the bottom diagram, and of course you could just have multiple individual IBIS volumes installed in a system.
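Extending the same back-of-the-envelope math to that hypothetical four-port drive (a sketch only, assuming each HSDL port runs at the current 10Gb/sec, Gen1-class rate):

# Theoretical aggregate bandwidth for a hypothetical four-port IBIS drive.
PORTS = 4
RAW_PER_PORT_GBPS = 10                              # x4 lanes @ 2.5 Gb/s each

aggregate_gbps = PORTS * RAW_PER_PORT_GBPS          # 40 Gb/s raw, as cited above
usable_mb_s = aggregate_gbps * (8 / 10) * 1000 / 8  # after 8b/10b overhead
print(f"{aggregate_gbps} Gb/s raw, ~{usable_mb_s:.0f} MB/s usable")
# 40 Gb/s raw, ~4000 MB/s usable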


>   offers up to 20Gb/sec peak bandwidth over an industry standard SAS connector,

>  which is over three times that of next gen 6Gbps SATA technology.

 


The statement above is a bit misleading.

What you call "an industry standard SAS connector" is actually four SAS cables bundled into a single multi-lane cable. See Highpoint's RocketRAID 2720 and its bundled cable for an identical example: one end could just as easily "break out" into 4 x SATA/6G connectors.

Using the current 6G standard, four such channels have a combined raw (peak) bandwidth of 6 Gbps x 4 = 24 Gbps, NOT 20 Gbps.

 

Moreover, if the obsolete 10/8 (8b/10b) encoding were eliminated, almost 20% of that overhead could be eliminated too, because the one start bit and one stop bit added to every byte transmitted would be replaced with a much more efficient ECC data structure. That results in an "effective" bandwidth of about 750 MB/sec per channel, or a combined bandwidth of 750 MB/sec x 4 = 3.0 GB/sec.

(6 Gbps / 10 = 600 MB/s; 6 Gbps / 8 = ~750 MB/s)
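(A quick sanity check of those numbers in Python, using nothing more than the 6 Gbps line rate and the encoding ratios above:)

# Per-channel and combined 6G bandwidth, with and without 8b/10b encoding.
LINE_RATE_GBPS = 6.0
CHANNELS = 4

with_8b10b = LINE_RATE_GBPS * 1000 / 10      # 600 MB/s per channel
without_8b10b = LINE_RATE_GBPS * 1000 / 8    # 750 MB/s per channel

print(f"4 channels with 8b/10b:    {CHANNELS * with_8b10b / 1000:.1f} GB/s")     # 2.4 GB/s
print(f"4 channels without 8b/10b: {CHANNELS * without_8b10b / 1000:.1f} GB/s")  # 3.0 GB/s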

 

For examples, compare Western Digital's 4K "Advanced Format" and the upcoming PCI-Express 3.0 spec: 128b/130b "jumbo frames" at the internal bus level.

Now, ramp the single-channel speed up to the 8 Gbps planned for PCI-E Gen3, and we get 4 channels @ 1 GB/sec = 4 GB/second raw bandwidth. (If the chipset's internals oscillate at 8 GHz, there is no good reason why the connecting cables should not do likewise.)
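(Same arithmetic for the planned Gen3 rate, assuming 128b/130b encoding; each lane's payload works out to roughly 985 MB/s:)

# Per-lane payload throughput under 8b/10b vs. 128b/130b encoding.
def lane_mb_s(line_rate_gbps, payload_bits, frame_bits):
    return line_rate_gbps * (payload_bits / frame_bits) * 1000 / 8

gen2 = lane_mb_s(5.0, 8, 10)     # ~500 MB/s per Gen2 lane
gen3 = lane_mb_s(8.0, 128, 130)  # ~985 MB/s per Gen3 lane
print(f"x4 Gen2: {4 * gen2 / 1000:.2f} GB/s, x4 Gen3: {4 * gen3 / 1000:.2f} GB/s")
# x4 Gen2: 2.00 GB/s, x4 Gen3: 3.94 GB/s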

 

Yes, the use of all those bridging chips appears to be obscuring the true potential of quad-channel ("QC") serial data transmissions (SAS & SATA).

P.S. Our guess is that the IT industry is riding out our national depression by maximizing the profit margin on their SSD products: some lawyers would refer to that practice as "price fixing".

 

MRFS

 

 

 


Correction: I did focus on the 4 cables bundled into one SFF cable, but the 20 Gbps correctly derives from the x4 Gen2 edge connector, i.e. x4 @ 5 Gbps = 20 Gbps.

 

So, it is "20Gb/sec peak bandwidth" over the x4 Gen2 edge connector, but NOT "over an industry standard SAS connector".

 

I apologize for causing any confusion.

 

MRFS

 


While it does look really interesting, why not just throw it on a PCIe card like the Fusion-io? It seems overly complex and adds a ton of clutter to the case.

I have to say, against MRFS though, that SSDs are coming down in price pretty fast. While not as fast as I would like, consumer drives are starting to come close to that $1-a-GB mark. I remember a few years back when a hard drive at $1 a GB was a really good deal. So I think the future is looking pretty good for SSD pricing.


Good points, Bob!

Given the market prices, I would prefer either of the following for their flexibility (prices are today's Newegg):

 

1 x RocketRAID 2720 x8 Gen2 edge connector:  $225

4 x 60GB OCZ Vertex 2 @ $155  =  $620

Total:  $845

-OR-

4 x 64GB Crucial RealSSD C300 @ $143  =  $572

Total (with the same $225 card):  $797
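(And a quick cost-per-GB check on those two builds in Python, using the Newegg prices quoted above; usable capacity after over-provisioning will of course be a bit lower:)

# Cost per gigabyte for the two example builds, prices as quoted above.
CARD = 225  # RocketRAID 2720

builds = {
    "4 x 60GB OCZ Vertex 2":         {"drives": 4, "gb": 60, "price": 155},
    "4 x 64GB Crucial RealSSD C300": {"drives": 4, "gb": 64, "price": 143},
}

for name, b in builds.items():
    total = CARD + b["drives"] * b["price"]
    per_gb = total / (b["drives"] * b["gb"])
    print(f"{name}: ${total} total, ${per_gb:.2f}/GB")
# 4 x 60GB OCZ Vertex 2: $845 total, $3.52/GB
# 4 x 64GB Crucial RealSSD C300: $797 total, $3.11/GB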

 

And I'm honestly waiting for 6G SSDs that also support TRIM in all RAID modes (Intel's RST still does not do so!).

 

If NAND flash chips are eventually going to wear out, at least we should be able to do efficient garbage collection on RAID arrays before that happens, without needing to reformat and start over!

 

MRFS


My concern with this would be having to use a spare PCI-E slot. If you have a sound card or, say, a TV tuner card, those slots get filled quickly.

 


FYI:  Anand Shimpi's review is here:

http://www.anandtech.com/show/3949/oczs-fastest-ssd-the-ibis-and-hsdl-interface-reviewed/1

 

"The 1-port PCIe card only supports PCIe 1.1,

while the optional 4-port card supports PCIe 1.1 and 2.0 and

will auto-negotiate speed at POST."

 

MRFS


MRFS:
the optional 4-port card supports PCIe 1.1 and 2.0 and will auto-negotiate speed at POST

Too bad it's a PCI-E connected device. Those of us with Intel Lynnfield systems only have a limited amount of bandwidth to play with on the PCI-E bus, and any additional device connected to the bus will throttle the video card back to x8 speeds instead of x16.

I don't like this aspect of the socket 1156 / P55 chipset design, but I have to admit that it's smokin' fast for what I'm doing.



As an electronics engineer, I decided to give OCZ a chance and installed an IBIS 160GB!

The system crashed twice: once while updating Windows 7, and again while entering the key codes for MS Visio Pro! Error: 0x80070002

I had made an image, but it was from the day before I installed all my applications!

So, re-installation, but this time taking the necessary time to check ALL the available user guides from OCZ (of course!) and to read every review about the IBIS. OCZ recommends:

- You MUST set your BIOS to use "S1 Sleep Mode" for proper operation, and

- Using S3 or AUTO may cause instability!

I have two VelociRaptors and internal SAS drives (no RAID).

What happens to the internal drives if I use S1 mode?

Is there any member who already uses one of the IBIS drives?

If the answer is YES, can you please tell me how you configured the BIOS?

The computer I'm using to test the IBIS:

- Asus P6T WS Pro

- Intel Core i7 965 Extreme 3.2GHz

- Kingston DDR3 12GB at 1600MHz

- nVIDIA Quadro FX 4800

- PCIe x16

Thanks in advance for any (positive!) comments ;-)

Regards,

paralou
