Our Test Methods: Under each test condition, the SSDs tested here were installed as secondary volumes in our testbed, with a separate drive used for the OS and benchmark installations. Our testbed's motherboard was updated with the latest BIOS available at the time of publication and AHCI mode was enabled for the host drive.
The SSDs were secure erased prior to testing (when applicable) and left blank, without partitions, for some tests; others, such as ATTO, PCMark, and CrystalDiskMark, required the drives to be partitioned and formatted. Windows Firewall, automatic updates, and screen savers were all disabled before testing, and Windows 10 Quiet Hours was enabled. Before each test run, we rebooted the system, ensured all temp and prefetch data was purged, and waited several minutes for drive activity to settle and for the system to reach an idle state before invoking a test. Also note that all of the drives featured here were tested with their own NVMe drivers installed, where available.
|HotHardware Test System
|Intel Core i7 and SSD Powered

Processor - Intel Core i7-8700K
Motherboard - Gigabyte Z370 Ultra Gaming (Z370 Chipset, AHCI Enabled)
Video Card - Intel HD 630 (integrated on board)
Memory - 16GB G.SKILL DDR4-2666
Storage -
Corsair Force GT (OS Drive)
Toshiba RC100 (240GB / 480GB)
Intel SSD 660p (1TB)
Intel SSD 760p (512GB)
Toshiba XG6 (1TB)
OCZ RD400 (1TB)
WD Black NVMe (1TB)
Samsung SSD 970 EVO (1TB)
Operating System - Windows 10 Pro x64
Chipset Drivers - Intel 10.1.1.45, iRST 220.127.116.116
Benchmarks -
HD Tune v5.70
CrystalDiskMark v6.0.0 x64
PCMark Storage Bench 2.0
|I/O Subsystem Measurement Tool
As we've noted in previous SSD articles, although IOMeter is a well-respected, industry-standard drive benchmark, we're not completely comfortable using it to test SSDs. Though our results with IOMeter appear to scale, it is debatable whether certain access patterns, as they are presented to and measured on an SSD, actually provide a valid representation of real-world performance; the access patterns we tested may not reflect your particular workload, for example. That said, we do think IOMeter is a reliable gauge of the relative throughput available from a given storage solution. In addition, IOMeter can place certain higher-end workloads on a drive that most other storage benchmark tools currently available cannot.
In the following tables, we're showing two sets of access patterns: a custom Workstation pattern with an 8K transfer size, consisting of 80% reads (20% writes) and 80% random (20% sequential) access, and a 4K pattern with a 4K transfer size, comprised of 67% reads (33% writes) and 100% random access. Queue depths from 1 to 32 were tested, though keep in mind that most consumer workloads reside at low queue depths.
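To make those mixes concrete, here is a minimal Python sketch (not IOMeter itself, just an illustration) that generates a stream of operations matching the Workstation access specification described above: 8K transfers, 80% reads / 20% writes, 80% random / 20% sequential. The function name and parameters are our own, purely for demonstration.

```python
import random

def make_workstation_pattern(n_ops, span_bytes, xfer=8192,
                             read_pct=0.80, random_pct=0.80, seed=42):
    """Generate (op, offset) pairs mimicking the Workstation mix:
    8K transfers, 80% reads / 20% writes, 80% random / 20% sequential."""
    rng = random.Random(seed)
    last = 0
    ops = []
    for _ in range(n_ops):
        op = "read" if rng.random() < read_pct else "write"
        if rng.random() < random_pct:
            # Random access: pick any 8K-aligned offset in the span
            offset = rng.randrange(0, span_bytes // xfer) * xfer
        else:
            # Sequential access: next 8K block after the previous I/O
            offset = (last + xfer) % span_bytes
        last = offset
        ops.append((op, offset))
    return ops

pattern = make_workstation_pattern(10_000, span_bytes=1 << 30)
reads = sum(1 for op, _ in pattern if op == "read")
print(f"reads: {reads / len(pattern):.0%}")  # roughly 80%
```

A real benchmark would then issue these operations against a raw volume, which is exactly what makes tools like IOMeter useful at high queue depths.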
The Toshiba XG6 performed right about in the middle of the pack with the two IOMeter access patterns we used. The drive was significantly faster than the OCZ RD400 and Intel 660p here, and was right in the mix with all of the other higher-end drives, save for the Samsung 970 EVO, which led the pack overall.
If we focus on available bandwidth and latency at QD1, the Toshiba XG6 once again finished about in the middle of the pack. Its latency is much better than the RD400's, but higher than that of the Samsung, WD, and Intel 760p drives, and its bandwidth trails those same drives as well.
|AS SSD Compression Benchmark
|Bring Your Translator: http://bit.ly/aRx11n
Next up, we ran the Compression Benchmark built into AS SSD, an SSD-specific benchmark developed by Alex Intelligent Software. This test is interesting because it uses a mix of compressible and incompressible data and reports both read and write throughput for the drive. We only graphed a small fraction of the data (1% compressible, 50% compressible, and 100% compressible), but the trend is representative of the benchmark's complete results.
The compressibility of the data being transferred to and from the XG6 has minimal impact on its performance. Here, we see the drive finishing in the upper echelon, especially in the write test.