> offers up to 20Gb/sec
peak bandwidth over an industry standard SAS connector,
> which is over
three times that of next gen 6Gbps SATA technology.
The statement above is a bit misleading:
What you call "an industry standard SAS connector" is actually
four SAS cables bundled into a single multi-lane cable.
See HighPoint's RocketRAID 2720 and its bundled cable
for an identical example: one end could just as easily
"break out" into 4 x SATA/6G connectors.
Using the current 6G standard, four such channels have
a combined raw (peak) bandwidth of 6 Gbps x 4 = 24 Gbps
NOT 20 Gbps.
Moreover, if the legacy 8b/10b encoding were eliminated,
almost 20% of the overhead would be eliminated too,
resulting in an "effective" bandwidth of about 750 MB/sec per channel,
or a combined bandwidth of 750 MB/sec x 4 = 3.0 GB/sec,
because the two extra bits that 8b/10b adds to every byte transmitted
would be replaced with a much more efficient ECC data structure.
(6 Gbps / 10 = 600 MB/s; 6 Gbps / 8 = ~750 MB/s)
For examples, compare Western Digital's 4K "Advanced Format" sectors
and the 128b/130b encoding in the upcoming PCI-Express 3.0 spec
at the internal bus level.
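The encoding-overhead arithmetic above can be sketched in a few lines (a minimal illustration; the 6 Gbps line rate and the 10-bits-per-byte vs. 8-bits-per-byte figures come straight from the post):

```python
# Effective bandwidth after line-encoding overhead.
# 8b/10b puts 10 bits on the wire for every 8-bit byte,
# so usable throughput is the line rate divided by 10.

def effective_mb_per_sec(line_rate_gbps, bits_per_byte_on_wire):
    """MB/s of payload for a given line rate and encoding cost."""
    return line_rate_gbps * 1e9 / bits_per_byte_on_wire / 1e6

print(effective_mb_per_sec(6, 10))           # 600.0 MB/s with 8b/10b
print(effective_mb_per_sec(6, 8))            # 750.0 MB/s with no encoding overhead
print(4 * effective_mb_per_sec(6, 8) / 1000) # 3.0 GB/s combined over 4 channels
```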
Now, ramp up the single-channel speed to 8 Gbps planned for PCI-E Gen3,
and we get 4 channels @ 1 GB/sec = 4 GB/second raw bandwidth.
(If the chipset's internals oscillate at 8 GHz, there is no good reason
why the connecting cables should not do likewise.)
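At Gen3 rates the same arithmetic holds up (a quick sketch; the 8 Gbps per-lane rate and 128b/130b encoding are from the published PCIe 3.0 spec):

```python
# PCIe Gen3: 8 Gbps per lane with 128b/130b encoding,
# i.e. 130 bits on the wire carry 128 bits of payload (~1.5% overhead).
gen3_gbps = 8
payload_gbps = gen3_gbps * 128 / 130   # ~7.88 Gbps usable per lane
per_lane_gb_s = payload_gbps / 8       # ~0.985 GB/s, i.e. roughly 1 GB/s
print(round(per_lane_gb_s, 3))         # 0.985
print(round(4 * per_lane_gb_s, 2))     # 3.94 -- close to the 4 GB/s raw figure
```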
Yes, the use of all those bridging chips appears to be obscuring
the true potential of quad-channel ("QC") serial data transmissions (SAS & SATA).
p.s. Our guess is that the IT industry is riding out our national depression
by maximizing the profit margin on their SSD products: some lawyers
would refer to that practice as "price fixing".
While it does look really interesting, why not just put it on a PCIe card like the Fusion-io? Seems overly complex and adds a ton of clutter to the case.
I have to say, contrary to MRFS, that SSDs are coming down in price pretty fast. While not as fast as I would like, consumer ones are starting to come close to that $1-a-GB mark. I remember a few years back when a hard drive at $1 a GB was a really good deal. So I think the future is looking pretty good for SSD pricing.
Good points, Bob!
Given the market prices, I would prefer either of the following, for their flexibility
(prices are today's Newegg):
1 x RocketRAID 2720, x8 Gen2 edge connector: $225
4 x 60GB OCZ Vertex 2 @ $155 = $620
Total: $845
-OR-
4 x 64GB Crucial RealSSD C300 @ $143 = $572
Total: $797
And, I'm honestly waiting for 6G SSDs that also support TRIM in all RAID modes
(Intel's RST still does not do so!)
If NAND flash chips will eventually wear out, at least we should be able to
do efficient garbage collection on RAID arrays before that happens,
without needing to reformat and start over!
My concern on this would be having to use a spare PCI-E slot. If you have a sound card or, say, a TV tuner card, those slots get filled quickly.
Correction: I did focus on the 4 cables bundled into one SFF cable,
but the 20 Gbps correctly derives from the x4 Gen2 edge connector i.e.:
x4 @ 5 Gbps = 20 Gbps.
So, it is "20Gb/sec
peak bandwidth" over the x4 Gen2 edge connector,
but NOT "over an industry standard SAS connector".
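The corrected figure works out as follows (a quick sanity check; the lane count and 5 Gbps per-lane Gen2 rate are from the correction above):

```python
# PCIe Gen2: 5 Gbps raw line rate per lane, before 8b/10b overhead.
lanes = 4
gen2_line_rate_gbps = 5
raw_gbps = lanes * gen2_line_rate_gbps
print(raw_gbps)                # 20 Gbps across the x4 Gen2 edge connector
# After 8b/10b, usable payload is 20 * 8/10 = 16 Gbps = 2.0 GB/s.
print(raw_gbps * 8 / 10 / 8)   # 2.0 GB/s
```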
I apologize for causing any confusion.
FYI: Anand Shimpi's review is here:
"The 1-port PCIe card only supports PCIe 1.1,
while the optional 4-port card supports PCIe 1.1 and 2.0 and
will auto-negotiate speed at POST."
> MRFS: the optional 4-port card supports PCIe 1.1 and 2.0 and will auto-negotiate speed at POST
Too bad it's a PCI-E connected device. Those of us with Intel Lynnfield systems only have a limited amount of bandwidth to play with on the PCI-E bus, and any additional devices connected to the bus will throttle the video card back to x8 speed, instead of x16.
I don't like this feature of the socket 1156/P55 chipset design, but I have to admit that it's smokin' fast for what I'm doing.
As an electronics engineer, I decided to give OCZ a chance and installed an IBIS 160GB!
The system crashed twice: once while updating Windows 7, and again while entering the key codes for MS Visio Pro! Error: 0x80070002
I had made an image, but only the day before I installed all my apps!
So, re-installation, but this time taking the necessary time to check ALL the available User Guides from OCZ (of course!) and reading every review of the IBIS. OCZ recommends:
- You MUST set your BIOS to use "S1 Sleep Mode" for proper operation, and
- Using S3 or AUTO may cause instability!
I have two VelociRaptors and internal SAS drives (no RAID).
What happens to the internal drives if I use S1 mode?
Is there any member who already uses one of the IBIS drives?
If the answer is YES, can you please tell me how you configured the BIOS?
The computer I use for testing the IBIS:
- Asus P6T WS Pro
- Intel Core i7 965 Extreme 3.2GHz
- Kingston DDR3 12GB at 1600 MHz
- NVIDIA Quadro FX 4800
- PCIe x16
Thanks in advance for any (positive!) comments ;-)