OCZ IBIS HSDL Solid State Drive Preview


News Posted: Tue, Sep 28 2010 11:24 PM
As we've noted more than once here before, as NAND Flash technologies evolve, SATA will go the way of the dino. It's not going to happen overnight, but as with its old, spinning hard drive counterpart, the writing is on the wall. The market needs new, higher-speed interfaces with lower overhead and more direct attachment to native system interfaces. OCZ has been trying its hand at developing PCI Express-based SSDs to address this requirement, rolling out all-new products like the RevoDrive we looked at recently and the device we'll be looking at here today. The new OCZ IBIS SSD utilizes a proprietary serial interface the company has coined HSDL, for High Speed Data Link, and it offers up to 20Gb/sec of peak bandwidth over an industry-standard SAS connector, more than three times the bandwidth of next-gen 6Gbps SATA technology. Journey on for all the details and a performance profile of a prototype we've been testing here in the lab...

MRFS replied on Wed, Sep 29 2010 10:10 AM

> "offers up to 20Gb/sec peak bandwidth over an industry standard SAS connector, which is over three times that of next gen 6Gbps SATA technology."

The statement above is a bit misleading: what you call "an industry standard SAS connector" is actually four SAS cables bundled into a single multi-lane cable.

See Highpoint's RocketRAID 2720 and its bundled cable for an identical example: one end could just as easily "break out" into 4 x SATA/6G connectors.

Using the current 6G standard, four such channels have a combined raw (peak) bandwidth of 6 Gbps x 4 = 24 Gbps, NOT 20 Gbps.

Moreover, if the legacy 8b/10b encoding were eliminated, its 20% overhead would be eliminated too, resulting in an "effective" bandwidth of about 750 MB/sec per channel, or a combined 750 MB/sec x 4 = 3.0 GB/sec, because the two extra bits carried with every byte (much like start and stop bits) would be replaced with a far more efficient framing scheme.

(6 Gbps / 10 = 600 MB/s; 6 Gbps / 8 = 750 MB/s)

 

For examples, compare Western Digital's 4K "Advanced Format" sectors and the upcoming PCI-Express 3.0 spec, with its 128b/130b "jumbo frames" at the internal bus level.

Now, ramp up the single-channel speed to the 8 Gbps planned for PCI-E Gen3, and we get 4 channels @ 1 GB/sec = 4 GB/second raw bandwidth. (If the chipset's internals oscillate at 8 GHz, there is no good reason why the connecting cables should not do likewise.)
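
To make the arithmetic above easy to check, here is a minimal Python sketch; the function and its name are mine, purely illustrative, not from any real tool:

    def effective_mb_per_s(line_rate_gbps, payload_bits, symbol_bits):
        # Payload bandwidth in MB/s once the line encoding's overhead is removed.
        return line_rate_gbps * payload_bits / symbol_bits * 1000 / 8

    # 6G SATA/SAS lane with 8b/10b encoding: 20% of the raw bit rate is overhead.
    print(effective_mb_per_s(6.0, 8, 10))        # 600.0 MB/s per channel
    print(effective_mb_per_s(6.0, 8, 10) * 4)    # 2400.0 MB/s for four channels

    # The same lane with the 8b/10b overhead removed, as argued above.
    print(effective_mb_per_s(6.0, 8, 8) * 4)     # 3000.0 MB/s, i.e. 3.0 GB/sec

    # A PCI-E Gen3 lane: 8 Gbps with 128b/130b framing (~1.5% overhead).
    print(effective_mb_per_s(8.0, 128, 130) * 4) # ~3938 MB/s, close to 4 GB/sec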

 

Yes, the use of all those bridging chips appears to be obscuring the true potential of quad-channel ("QC") serial data transmission (SAS & SATA).

p.s. Our guess is that the IT industry is riding out the recession by maximizing the profit margins on its SSD products; some would call that practice "price gouging".

 

MRFS

Moderator

While it does look really interesting, why not just put it all on a PCI-E card like the Fusion-io? This seems overly complex and adds a ton of clutter to the case.

I have to disagree with MRFS, though: SSDs are coming down in price pretty fast. While not as fast as I would like, consumer drives are starting to come close to that $1-a-GB mark. I remember a few years back when a hard drive at $1 a GB was a really good deal, so I think the future is looking pretty good for SSD pricing.

MRFS replied on Wed, Sep 29 2010 11:14 AM

Good points, Bob!

Given the market prices, I would prefer either of the following, for their flexibility (prices are today's Newegg):

 

1 x RocketRAID 2720 (x8 Gen2 edge connector): $225
4 x 60GB OCZ Vertex 2 @ $155 = $620
Total: $845

-OR- (with the same controller)

4 x 64GB Crucial RealSSD C300 @ $143 = $572
Total: $797
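
As a quick sanity check on those totals, and on the $1-a-GB discussion above, a throwaway Python calculation using the prices and capacities as quoted:

    controller = 225                   # RocketRAID 2720
    vertex2 = controller + 4 * 155     # 4 x 60GB OCZ Vertex 2 (240GB total)
    c300    = controller + 4 * 143     # 4 x 64GB Crucial RealSSD C300 (256GB total)

    print(vertex2, round(vertex2 / (4 * 60), 2))  # 845  ~3.52 $/GB
    print(c300,    round(c300   / (4 * 64), 2))   # 797  ~3.11 $/GB

So either build is still a long way from the $1-a-GB mark once the controller is counted.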

 

And I'm honestly waiting for 6G SSDs that also support TRIM in all RAID modes (Intel's RST still does not!).

If NAND Flash chips are eventually going to wear out, at least we should be able to do efficient garbage collection on RAID arrays before that happens, without needing to reformat and start over!

 

MRFS

lonewolf replied on Wed, Sep 29 2010 12:34 PM

My concern with this would be having to use up a spare PCI-E slot. If you have a sound card or, say, a TV tuner card, those slots get filled quickly.

 

MRFS replied on Wed, Sep 29 2010 3:47 PM

Correction: I did focus on the 4 cables bundled into one SFF cable, but the 20 Gbps figure correctly derives from the x4 Gen2 edge connector, i.e. x4 @ 5 Gbps = 20 Gbps.

So, it is "20Gb/sec peak bandwidth" over the x4 Gen2 edge connector, but NOT "over an industry standard SAS connector".
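
In the same spirit as the sketch above, the corrected figure works out like this (again illustrative Python, and note that PCI-E Gen2 still uses 8b/10b encoding):

    lanes, gen2_gbps = 4, 5.0
    raw_gbps = lanes * gen2_gbps                   # 20.0 Gbps, the figure quoted in the article
    effective_mb_s = raw_gbps * 8 / 10 * 1000 / 8  # 8b/10b leaves 16 Gbps = 2000 MB/s
    print(raw_gbps, effective_mb_s)                # 20.0 2000.0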

 

I apologize for causing any confusion.

 

MRFS

MRFS replied on Wed, Sep 29 2010 4:11 PM

FYI:  Anand Shimpi's review is here:

http://www.anandtech.com/show/3949/oczs-fastest-ssd-the-ibis-and-hsdl-interface-reviewed/1

 

"The 1-port PCIe card only supports PCIe 1.1,

while the optional 4-port card supports PCIe 1.1 and 2.0 and

will auto-negotiate speed at POST."

 

MRFS

realneil replied on Thu, Sep 30 2010 7:01 PM

MRFS: "the optional 4-port card supports PCIe 1.1 and 2.0 and will auto-negotiate speed at POST"

Too bad it's a PCI-E connected device. Those of us with Intel Lynnfield systems only have a limited amount of PCI-E bandwidth to play with, and any additional device that shares the CPU's PCI-E lanes will throttle the video card back to x8 speeds, instead of x16.

I don't like this aspect of the socket 1156/P55 chipset design, but I have to admit that it's smokin' fast for what I'm doing.


Dogs are great judges of character, and if your dog doesn't like somebody being around, you shouldn't trust them.

paralou replied on Thu, Apr 14 2011 6:12 AM

As an electronic engineer, I decided to give OCZ a chance and installed an IBIS 160GB!

The system crashed twice: once while updating Windows 7, and again when entering the key codes for MS Visio Pro! Error: 0x80070002

I had made an image, but it dated from the day before I installed all my apps!

So, re-installation, but this time taking the necessary time to check ALL the available user guides from OCZ (of course!) and to read every review of the IBIS. OCZ recommends:

- You MUST set your BIOS to use "S1 Sleep Mode" for proper operation, and

- Using S3 or AUTO may cause instability!

I have two VelociRaptors and internal SAS drives (NO RAID mode). What happens to the internal drives if I use S1 mode?

Is there any member who already uses one of the IBIS drives? If the answer is YES, can you please tell me how you configured the BIOS?

The computer I use for testing the IBIS:

- Asus P6T WS Pro
- Intel Core i7 965 Extreme 3.2GHz
- Kingston 12GB DDR3 at 1600 MHz
- NVIDIA Quadro FX 4800
- PCI-E x16

Thanks in advance for any (positive!) comments ;-)

Regards,

paralou
