Our Test Methods: Under each test condition, the SSDs tested here were installed as secondary volumes in our testbed, with a separate drive used for the OS and benchmark installations. Our testbed's motherboard was updated with the latest BIOS available at the time of publication and AHCI mode was enabled for the host drive.
The SSDs were secure erased prior to testing (when applicable) and left blank, without partitions, for some tests; others, such as ATTO, PCMark, and CrystalDiskMark, required the drives to be partitioned and formatted. Windows Firewall, automatic updates, and screen savers were all disabled before testing, and Windows 10 Quiet Hours was enabled. Before every test run, we rebooted the system, ensured all temp and prefetch data was purged, and waited several minutes for drive activity to settle and the system to reach an idle state before invoking a test. Also note that all of the drives featured here were tested with their own NVMe drivers installed, where available.
|HotHardware Test System
|Intel Core i7 and SSD Powered
Processor - Intel Core i7-8700K
Motherboard - Gigabyte Z370 Ultra Gaming (Z370 Chipset, AHCI Enabled)
Video Card - Intel HD 630
Memory - 16GB G.SKILL DDR4-2666
Audio - Integrated on board
Storage -
Corsair Force GT (OS Drive)
Toshiba RC100 (240GB / 480GB)
ADATA XPG SX8200 (480GB)
OCZ RD400 (1TB)
WD Black NVMe (1TB)
Samsung SSD 970 EVO (1TB)
OS - Windows 10 Pro x64
Chipset Drivers - Intel 10.1.1.45, iRST
Benchmarks -
HD Tune v5.70
CrystalDiskMark v6.0.0 x64
PCMark Storage Bench 2.0
|I/O Subsystem Measurement Tool
As we've noted in previous SSD articles, though IOMeter is a well-respected, industry-standard drive benchmark, we're not completely comfortable using it to test SSDs. Though our results with IOMeter appear to scale, it is debatable whether certain access patterns, as they are presented to and measured on an SSD, provide a valid example of real-world performance; the access patterns we tested may not reflect your particular workload, for example. That said, we do think IOMeter is a reliable gauge of relative available throughput with a given storage solution. In addition, there are certain higher-end workloads you can place on a drive with IOMeter that you can't with most other storage benchmark tools currently available.
In the following tables, we're showing two sets of access patterns: a custom Workstation pattern with an 8K transfer size, consisting of 80% reads (20% writes) and 80% random (20% sequential) access, and a 4K access pattern with a 4K transfer size, comprising 67% reads (33% writes) and 100% random access. Queue depths from 1 to 32 were tested, though keep in mind that most consumer workloads reside at low queue depths.
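To make the Workstation mix concrete, here is a minimal Python sketch of such an access pattern: 8K transfers, roughly 80% reads / 20% writes and 80% random / 20% sequential access against a scratch file. This is purely illustrative (it is not IOMeter, and the function and constant names are our own), but it shows how such a mix is composed.

```python
import os
import random
import tempfile

BLOCK = 8 * 1024             # 8K transfer size
FILE_SIZE = 4 * 1024 * 1024  # small scratch file for illustration
OPS = 1000

def run_workstation_pattern(path, ops=OPS, seed=42):
    """Issue a seeded 80/20 read/write, 80/20 random/sequential mix."""
    rng = random.Random(seed)
    counts = {"read": 0, "write": 0, "random": 0, "sequential": 0}
    offset = 0
    blocks = FILE_SIZE // BLOCK
    with open(path, "r+b") as f:
        for _ in range(ops):
            if rng.random() < 0.8:                 # 80% random access
                offset = rng.randrange(blocks) * BLOCK
                counts["random"] += 1
            else:                                  # 20% sequential: next block
                offset = (offset + BLOCK) % FILE_SIZE
                counts["sequential"] += 1
            f.seek(offset)
            if rng.random() < 0.8:                 # 80% reads
                f.read(BLOCK)
                counts["read"] += 1
            else:                                  # 20% writes
                f.write(os.urandom(BLOCK))
                counts["write"] += 1
    return counts

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"\0" * FILE_SIZE)
        path = tmp.name
    print(run_workstation_pattern(path))
    os.unlink(path)
```

A real benchmark like IOMeter additionally controls queue depth (outstanding I/Os) and bypasses OS caching, which this sketch does not attempt.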
The 480GB ADATA XPG SX8200 we tested finished about mid-pack overall across the access patterns at the higher queue depths, but at QD1 (which is arguably the most important for consumer systems), the drive led the pack.
If we focus on available bandwidth and latency at QD1, the ADATA XPG SX8200 pulls ahead of the other drives we tested: peak bandwidth was the highest overall and latency was the lowest.
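For readers curious what "latency at QD1" means in practice: with a single outstanding I/O, per-operation latency can be measured by simply timing each synchronous request. The sketch below (purely illustrative, not our test harness; timings against a buffered temp file reflect OS caching rather than raw drive latency) times 4K random reads one at a time.

```python
import os
import random
import statistics
import tempfile
import time

BLOCK = 4 * 1024  # 4K reads, as in the 4K access pattern

def time_qd1_reads(path, n_reads=100, file_size=1024 * 1024, seed=7):
    """Time synchronous 4K random reads -- one outstanding I/O (QD1)."""
    rng = random.Random(seed)
    latencies = []
    with open(path, "rb") as f:
        for _ in range(n_reads):
            f.seek(rng.randrange(file_size // BLOCK) * BLOCK)
            start = time.perf_counter()
            f.read(BLOCK)
            latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(1024 * 1024))
        path = tmp.name
    lats = time_qd1_reads(path)
    os.unlink(path)
    print("mean latency (s):", statistics.mean(lats))
```

At higher queue depths a drive can overlap many requests, so throughput rises even though each individual request may take longer; that's why QD1 latency is the better proxy for how snappy a consumer system feels.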
|AS SSD Compression Benchmark
|Bring Your Translator: http://bit.ly/aRx11n
Next up, we ran the Compression Benchmark built into AS SSD, an SSD-specific benchmark developed by Alex Intelligent Software. This test is interesting because it uses a mix of compressible and incompressible data and reports both read and write throughput of the drive. We only graphed a small fraction of the data (1% compressible, 50% compressible, and 100% compressible), but the trend is representative of the benchmark's complete results.
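To illustrate what "N% compressible" data looks like, here is a small Python sketch (our own approximation, not AS SSD's actual generator) that builds buffers with a target compressible fraction by mixing zero-filled and pseudo-random bytes, then checks the result with zlib. Drives whose controllers compress data on the fly perform differently across these buffers; drives that don't, like those tested here, should be insensitive to the mix.

```python
import os
import zlib

def make_buffer(size, compressible_fraction):
    """Mix zero-filled (compressible) and random (incompressible) bytes."""
    n_zero = int(size * compressible_fraction)
    return b"\0" * n_zero + os.urandom(size - n_zero)

def compression_ratio(buf):
    """Compressed size over original size; lower means more compressible."""
    return len(zlib.compress(buf)) / len(buf)

if __name__ == "__main__":
    size = 64 * 1024
    for frac in (0.01, 0.50, 1.00):
        ratio = compression_ratio(make_buffer(size, frac))
        print(f"{frac:.0%} compressible -> ratio {ratio:.3f}")
```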
The compressibility of the data being transferred has no impact on the ADATA XPG SX8200's performance, and throughput looks good, though it does trail the Samsung and WD drives.