Maxtor DiamondMax 10: Exploring NCQ & RAID


End user demand for increased hard drive capacity is waning somewhat, now that we can store over 200,000 typical digital photos, or 60,000 music files, for a street price of under $100. Sixty thousand music files could play for almost six months without repeating, and you would have to stay awake for almost 70 days straight to look at each of those 200,000 photographs for 30 seconds. That's not to say the need for increased storage capacity is waning in the enterprise, but given the choice between a faster 250GB hard drive and an average-performing 300GB drive for a desktop system, most enthusiasts are likely to go with the smaller but faster 250GB drive. While the race for hard drive capacity on the desktop may be slowing to a crawl, the race for higher performance is heating up. New interfaces, higher bus transfer rates, faster rotational speeds and improved drive software are combining to deliver impressive performance.

One of the most recent additions on the software front is Native Command Queuing (NCQ). NCQ attempts to reduce latency and increase throughput by reordering incoming read and write requests into the most efficient sequence, minimizing head travel and rotational delay. For example, say a system asks for the data at position 20 on a drive, immediately followed by a request for more data at position 19. A drive without NCQ will read the data at position 20, and then wait for the platters to make nearly a complete rotation to read the data at position 19. An NCQ-enabled drive will rearrange the 20 > 19 requests into 19 > 20, so it can read both pieces of data in a single pass without the additional rotation.
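To make the reordering concrete, here is a toy model in Python rather than anything resembling Maxtor's actual firmware: requests are reduced to angular positions on the platter (0-359 degrees), and the NCQ-style scheduler greedily services whichever pending request lies closest ahead of the head. The function names and the greedy policy are our own simplifications; a real drive also weighs seek distance across tracks, SATA's 32-command queue depth, and starvation protection.

```python
def service_fifo(head, requests):
    """Service requests in arrival order; return total degrees of rotation."""
    total = 0
    for pos in requests:
        total += (pos - head) % 360  # platters only spin one way
        head = pos
    return total

def service_ncq(head, requests):
    """Greedy NCQ-style ordering: always take the request closest ahead."""
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda p: (p - head) % 360)
        total += (nxt - head) % 360
        pending.remove(nxt)
        head = nxt
    return total

# The article's example, mapped to degrees: the later request (190) sits
# just "behind" the earlier one (200) on the same rotation.
print(service_fifo(0, [200, 190]))  # 200 + 350 = 550 degrees of rotation
print(service_ncq(0, [200, 190]))   # 190 + 10  = 200 degrees of rotation
```

In the FIFO case, the head flies past position 190 on its way to 200 and must wait nearly a full rotation to come back; the reordered queue picks up both requests in a single pass.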

On the hardware front, we have products like the latest integrated RAID controller offering from Intel on their ICH7 Southbridge. Back in 1998, when the rumblings of IDE RAID first began, most of us were running 430TX-based motherboards with a maximum hard drive transfer rate of 33 MB/s. The first practical implementations of IDE RAID, such as the Promise FastTrak66, supported up to 4 drives in RAID 0, 1 and 10 formats and were limited by the bandwidth of the PCI bus. Today, high-end systems have far more available bandwidth, and just about every enthusiast-class motherboard includes some form of integrated RAID functionality.

In this article, we are going to explore the performance of Maxtor's DiamondMax 10 hard drive as it compares to last year's DiamondMax 9 and to the current SATA HD performance leader, the Western Digital Raptor WD740GD. We will then combine the drives in various quantities, RAID formats and stripe sizes to measure the performance impact of these configurations at relatively similar price points. The RAID formats we will be using are:


RAID 1, which is basic mirroring, writes all data identically to two drives. Write performance is generally slightly lower than that of a single drive, since every piece of data is written twice by two drives that may start in different relative positions. Read performance could in theory be much higher, since any read operation could be split in half with each drive doing half the work, but in practice, most controllers read from only a single drive, giving read performance similar to that of a single drive.


RAID 0, which is basic striping, takes all read and write operations and spreads them across all drives equally. This is the highest-performing version of RAID, but it also increases the risk of data loss, since the failure of any single drive in the array causes the loss of all data in the array.


RAID 5, which is striping with parity, is similar to RAID 0, except that each stripe of data written across the drives has a block of parity data written to another drive. In this configuration, the loss of a single drive will not destroy the array, but the overhead of calculating the parity data can have a significant impact on write performance. (The parity calculation is illustrated in the sketch after this list.)
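To illustrate the three layouts, here is a minimal Python sketch, our own simplification rather than anything specific to Maxtor's or Intel's controllers: mirroring duplicates every block, striping deals blocks out round-robin, and RAID 5's parity is a bytewise XOR that lets any one missing block be rebuilt from the surviving blocks.

```python
def raid1_write(block):
    """Mirroring: the same block is written to both drives."""
    return [block, block]

def raid0_write(blocks, drives=3):
    """Striping: blocks are dealt round-robin across the drives."""
    stripes = [[] for _ in range(drives)]
    for i, block in enumerate(blocks):
        stripes[i % drives].append(block)
    return stripes

def xor_parity(blocks):
    """RAID 5 parity: bytewise XOR of the data blocks in one stripe."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Losing any single block is recoverable: XOR the survivors with the
# parity block and the missing data falls out.
stripe = [b'\x0f\x0f', b'\xf0\xf0', b'\x33\x33']
parity = xor_parity(stripe)
assert xor_parity([stripe[0], stripe[2], parity]) == stripe[1]
```

The assert at the end is the whole point of RAID 5: the same XOR that produced the parity block reconstructs whichever data block is lost, at the cost of computing parity on every write.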


Finally, we will discuss the impact of stripe sizes on RAID performance. The stripe size is set during array initialization and represents the size of the pieces a file is divided into for writing across the drives. For example, a 1 MB file written to a RAID 0 array using a 4K stripe size would be broken into 256 pieces that are spread across all the drives in the array. The same file using a 128K stripe size is broken into only 8 pieces. In RAID 5, the data is broken into pieces according to the stripe size, and additional parity pieces are calculated from each stripe for error recovery.
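A quick sanity check of that arithmetic in Python (stripe_count is just our own helper name, and we assume the file size divides evenly):

```python
def stripe_count(file_size, stripe_size):
    """How many pieces a file is split into at a given stripe size."""
    return file_size // stripe_size

MB, KB = 1 << 20, 1 << 10
print(stripe_count(1 * MB, 4 * KB))    # 256 pieces at a 4K stripe size
print(stripe_count(1 * MB, 128 * KB))  # 8 pieces at a 128K stripe size
```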

