I admit to a certain laziness in my advanced years. When I was editor of ExtremeTech years ago, I maintained a certain rigor about having dedicated testbeds for graphics and CPU. A dedicated test system requires some specific tender loving care to ensure you get reproducible results. In addition, you want the test system to allow the component under test to shine. So several key aspects need to be maintained:
- You need to keep the OS fairly clean. In the era of Windows 10 and SSDs, the OS doesn’t need to be pristine, but you need to be sure you don’t have a lot of background stuff running. That may seem obvious, until you realize that performance testing these days often requires always-connected service applications such as Steam or Adobe Creative Cloud. If any app in this class starts downloading updates in the background, it directly affects performance testing.
- The same holds true for Windows updates. If you’re testing on Windows 10, you really need to use Windows 10 Pro so you have some control over update scheduling.
- Check to make sure no extraneous applications are running while you’re running performance tests. For example, AMD Radeon graphics driver installs often happily default to recording game videos while you play, which will adversely affect benchmarking.
- Check for VSYNC anomalies, such as Nvidia drivers with adaptive or fast sync enabled.
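The checklist above can be partially automated. Here’s a minimal sketch of a pre-benchmark audit that flags known background apps before a test run; the process names and reasons are illustrative examples I’ve chosen, not an exhaustive or official list:

```python
# Hypothetical pre-benchmark audit: flag running processes known to skew
# results. The watch list below is illustrative, not exhaustive.
DISRUPTIVE = {
    "steam.exe": "may download game updates in the background",
    "creative cloud.exe": "Adobe updater can run unprompted",
    "radeonsettings.exe": "game video recording may be enabled by default",
    "onedrive.exe": "cloud sync competes for disk and network bandwidth",
}

def audit_processes(running):
    """Return {process: reason} for each running process on the watch list."""
    return {p: DISRUPTIVE[p.lower()] for p in running if p.lower() in DISRUPTIVE}

# Example: feed it a snapshot of running process names
snapshot = ["explorer.exe", "Steam.exe", "chrome.exe"]
for proc, reason in audit_processes(snapshot).items():
    print(f"WARNING: {proc} is running ({reason})")
```

On a real test rig you’d feed this a live process snapshot (e.g., from `tasklist` on Windows) rather than a hard-coded list, and refuse to start the benchmark until the audit comes back clean.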
I’ve been out of the benchmarking game long enough that I needed to run performance tests on my production system, which is not ideal. While I’ve been careful to disable or mitigate performance-sucking background apps, doing so proved tedious every time I needed to run a performance test. An interesting side effect of swapping out numerous graphics cards was the need to re-authenticate Steam and Origin, as well as Chrome and some other online apps.
So I’m building a duplicate system as a dedicated test system. I believed with all sincerity that modern PCs running Windows 10 would produce highly similar benchmark results on identical hardware, even if one system carried application cruft. I discovered I was wrong — the clean system generates performance results roughly 5-7% better than the older system. On the other hand, I felt some relief when I found that the relative order of the results remained unchanged — the differences were 5-7% across the board.
However, I still believe testing on high-end but mainstream systems offers more utility than testing on extreme PCs built around Intel Extreme Edition processors. A Core i7-6700K running at 4GHz is more than adequate for single-GPU testing, and having modern platform features instead of the last-gen chipsets common to Extreme Edition PCs makes the results a bit more useful. I also believe in testing in actual PC cases, because that’s how you really know whether a particular component will be overly loud or run hot.
In case you’re interested, here’s the dedicated test system I built:
| Component | Brand / Type |
|---|---|
| CPU | Intel Core i7-6700K |
| Motherboard | Gigabyte GA-Z170X-Gaming 7 |
| Memory | 16GB Corsair Vengeance LPX DDR4-2666 |
| Power Supply | Corsair HX750i (new edition) |
| SSD | Crucial MX200 1TB |
| Secondary HDD | Western Digital 3TB Red |
| Case | Corsair Obsidian 550D |
I’m now in the process of retesting my baseline GPUs, so the next review I write will include the new results; the conclusions won’t differ, just the absolute numbers. Even so, I’m a little sheepish, because I should have known better. Even Windows 10 can get a little crufty with constant use. I’ve also imaged the baseline system so I can return to a known good state.
Of course, if you’re a hardware reviewer reading this, you’re probably laughing at me. But like all lessons in tech and life, sometimes you need to relearn them.