OCZ RevoDrive Review: SSD RAID + PCI-Express

SSD manufacturers have been energetically rolling out new high-end, high-capacity products based around updated Indilinx, SandForce, or Marvell controllers, while simultaneously introducing smaller SSDs with better performance and lower prices than we saw with first-generation products. As SSDs become more popular and economical, we're seeing the rise of yet another consumer storage tier, over and above even the more expensive, high-performing SSDs: flash storage mated to PCI-Express. OCZ has launched its own product into the burgeoning sector, the RevoDrive...


Nice performance, but I would really like to see this compared to a pair of OCZ Vertex 2 60GB drives in RAID 0 (two of those can be had for less money). Since the RevoDrive suffers from all the usual issues of a RAID setup (along with a few stemming from its older controller), I would like to see how it performs against a RAID array built from the same company's drives.

Any chance you guys will be able to test this out for us?


Infinity,

All of the consumer-level RAID products are software RAID (aka FakeRAID), meaning they all rely on the CPU to do the heavy lifting. My guess is that while the RevoDrive probably picks up a bit of latency from its controllers and could be performance-bound in 4-way configurations by the SiI 3124's PCI-X interface, the pipe between the controller and the CPU is higher bandwidth/lower latency than we'd see using a SATA controller.

I suspect we'd see similar results if we benched the two against each other. Certainly that's what we'd expect to see--moving over to a conventional SATA array removes certain factors but potentially inserts others.


It would be more interesting to see if they could make it operate as system RAM.


Anima,

If you're referring to what Intel used to call "Robson Technology"--the use of flash RAM to create a new, lower-latency data layer between your system and the hard drive--the answer is yes: Windows 7 detects the Revo and will offer to enable that technology.

Whether or not it would actually be worth doing so is an entirely different question. Existing hierarchy models (L1, L2, (L3), main memory, HDD cache, HDD) have slowly evolved over quite some time. Robson is still quite new, and its performance depends on multiple factors, including the size of the cache, the cache interface speed, the HDD/SSD's speed, and the amount of available system memory.

A Windows OS with a great deal of installed RAM could theoretically take a performance hit from Robson or show no benefit. Systems with less RAM might show different results, or Robson might reduce power consumption.

As for using flash RAM as RAM, you really wouldn't want to do that, even if you could. It's much, much slower and far higher latency than DRAM, and NAND cells wear out after a limited number of write cycles--exactly the kind of constant rewriting that system memory sees.
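
To put rough numbers on that (these are ballpark, order-of-magnitude latencies, not anything measured in this review):

    # Rough order-of-magnitude access latencies illustrating why NAND
    # flash can't substitute for DRAM in the memory hierarchy.
    # (Ballpark figures, not measurements from this review.)

    latency_ns = {
        "L1 cache": 1,
        "L2 cache": 4,
        "Main memory (DRAM)": 80,
        "NAND flash read": 50_000,       # tens of microseconds
        "HDD random access": 8_000_000,  # several milliseconds
    }

    for tier, ns in latency_ns.items():
        print(f"{tier:>20}: {ns:>10,} ns")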


Joel,

How much of a real-world performance hit would one take by using this device on a Lynnfield system with the P55 chipset, given that installing more than one device on that platform cuts the PCI-E link from x16 to x8?


My guess would be little to none, Neil.

SLI and Crossfire do the same thing, yet they still essentially double your performance.

I would assume it would not have any noticeable impact on system performance.


RealNeil,

I'm thinking very little, for two reasons:

1) Remember, we're actually talking about PCI-Express 2.0 here. That means an x8 PCIe 2.0 slot is effectively running at x16 PCIe 1.1 speeds. Historically, the performance difference in Crossfire/SLI rigs showed up when we compared x16/x4 connections (with the x4 lanes hanging off the Southbridge). That configuration was both high-latency and exceedingly lopsided--the worst of both worlds.

x8/x8 configurations, however, are generally just a tiny bit behind x16/x16, if behind at all. Oftentimes, you won't see even a shred of difference unless you're running tip-top cards.
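
The back-of-envelope math (using the published per-lane signaling rates; nothing here is measured from the review):

    # Usable PCIe bandwidth per direction, before protocol overhead.
    # PCIe 1.1 signals at 2.5 GT/s per lane and PCIe 2.0 at 5 GT/s;
    # both use 8b/10b encoding (8 data bits per 10 bits on the wire).

    def pcie_mb_per_s(lanes, gt_per_s):
        return lanes * gt_per_s * 1e9 * (8 / 10) / 8 / 1e6

    print(pcie_mb_per_s(16, 2.5))  # PCIe 1.1 x16 -> 4000.0 MB/s
    print(pcie_mb_per_s(8, 5.0))   # PCIe 2.0 x8  -> 4000.0 MB/s (same pipe)
    print(pcie_mb_per_s(4, 2.5))   # PCIe 1.1 x4  -> 1000.0 MB/s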


"At 64-bits wide and an operating speed of 133MHz, the SiI 3124 RAID controller has a maximum throughput of 1.06GB/s (half-duplex). That's significantly less than the 800MB/s of full-duplex bandwidth a native PCIe x4 controller would've provided" - Quoted from the article.

I know it's not a typo on the bandwidth (I did the math myself, lol), but 1.06GB/s is not less than 800MB/s, lol.

The problem is that it's half-duplex... so you can only get 1.06GB/s of bandwidth in one direction at a time. That isn't a problem if you're only reading or only writing, but it would cause some latency issues when you're both reading and writing.

Full duplex allows you to use the full bandwidth in both directions at the same time.
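
Spelling the math out (raw bus figures; the article's 800MB/s is presumably the x4 rate after protocol overhead):

    # Raw bus math behind the numbers above. PCI-X shares its bandwidth
    # between reads and writes; PCIe 1.1 gets its full rate in each
    # direction independently.

    pcix_total_gb = 64 * 133e6 / 8 / 1e9  # 64-bit bus at 133MHz -> ~1.06 GB/s, shared
    pcie_x4_gb = 4 * 250e6 / 1e9          # PCIe 1.1 x4 -> 1.0 GB/s raw, per direction

    print(f"PCI-X 64/133: {pcix_total_gb:.2f} GB/s total (half-duplex)")
    print(f"PCIe 1.1 x4:  {pcie_x4_gb:.2f} GB/s each way (full-duplex)")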

I don't think "Significantly less" is the right way to describe the issue :-P

Anyway... can someone please explain to me how it's possible for this thing to be bootable?

Are new BIOS revisions able to search the PCI-E bus for bootable devices?

Acarzt,

I disagreed then, when I wrote it, and I disagree now. :P Also, booting off the PCI/PCIe bus has never been an issue--it was possible to install a RAID card ten years ago and boot off the thing. The card carries its own option ROM that the system BIOS executes during POST, which is how it shows up as a boot device.
