Today we’ve prepared one of those “for the sake of science” reviews that we enjoy compiling from time to time. The Radeon RX 6700 XT, AMD’s latest GPU and one we recently reviewed, delivers RTX 2080 Ti-like performance. While this is an impressive GPU, it’s not particularly great in terms of value, but then that’s the theme of 2021.
When compared to the previous model, the 5700 XT, we’re looking at around 30% more performance on average for a 20% increase in price, so not great value for a generational leap, but still solid performance delivery. When you consider that the 5700 XT and 6700 XT feature the exact same core configuration and are built on the exact same TSMC 7nm manufacturing process, a 30% performance uplift is nothing short of amazing.
Typically, you’d expect gains this big when moving to a newer and superior manufacturing process, like what we saw with Nvidia’s Pascal architecture, which moved to TSMC’s 16nm process from the 28nm process used by Maxwell. That jump enabled higher clock speeds, which drastically improved performance and efficiency. Yet AMD managed to achieve something similar on the same process. So how do you get 30% faster?
There are a number of architectural differences between RDNA 1 and RDNA 2. The 6700 XT’s cores do have a slight IPC advantage, but geometry performance is worse when it comes to discarding primitives, as the 6700 XT is configured with only half the fixed-function primitive units of the 5700 XT.
For now at least, the key performance advantage the 6700 XT has over the 5700 XT appears to be clock speed. On paper, the 5700 XT is rated for a boost frequency of 1905 MHz, whereas the 6700 XT’s boost clock is 2581 MHz, a substantial 35% increase. In our testing, the difference was closer to 42% when comparing the AMD reference models.
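The clock figures above are easy to sanity-check. A minimal sketch, using only the rated boost clocks quoted in this article:

```python
# Clock-speed uplift implied by the rated boost clocks quoted above.
boost_5700xt = 1905  # MHz, 5700 XT rated boost clock
boost_6700xt = 2581  # MHz, 6700 XT rated boost clock

uplift = boost_6700xt / boost_5700xt - 1
print(f"Rated boost clock uplift: {uplift:.0%}")  # → 35%
```

The observed ~42% gap in our testing is larger than the rated 35% because real sustained clocks under load differ from the paper specs.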
Before we get on with our clock-for-clock comparison, here’s what regular gaming performance looks like on these two GPUs, and where they stand in the overall market. For more benchmarks, see our full RX 6700 XT review.
Still, the Radeon RX 6700 XT clocks at least 35% higher and we saw on average a 30% performance uplift, so it would seem much of that gain is simply down to the architectural design that allows RDNA 2 to clock higher. That’s what we’re going to explore today: by clocking the 5700 XT and 6700 XT at the same frequency, we can check whether most of the performance uplift really does come down to clock speed. If it does, this test will easily highlight it.
For testing we’re using AMD reference models. Both have been clocked at 1.8 GHz as we were able to maintain this frequency by maxing out the power target. At 1.9 GHz the 5700 XT frequency started to fluctuate quite a bit, often dropping down to 1.8 GHz. But at 1.8 GHz clock speeds for both models were very consistent, so this is about as apples-to-apples as you can get in terms of operating frequency.
While testing, the clock speed was monitored constantly to ensure that both GPUs were operating at the targeted frequency.
As for the memory, we’ve left that stock as that’s the best way to conduct this test. You might think that hands the 5700 XT an advantage, as its memory bandwidth is 17% higher thanks to the wider memory bus, but that’s not the case. If anything, the advantage goes to the 6700 XT, as it uses memory much more efficiently with better delta color compression and, more importantly, Infinity Cache. Essentially, RDNA 2 can achieve greater frame rates at a given bandwidth.
The Infinity Cache plays a key role here: the 96MB on-chip cache functions a lot like an L3 cache on a CPU. This local cache buffers reads and writes to main memory and is much faster than working out of VRAM, helping to boost the effective memory bandwidth of the 6700 XT relative to the 5700 XT.
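A rough model shows why a large on-chip cache raises effective bandwidth. Note the cache bandwidth and hit rate below are illustrative assumptions, not AMD-published figures; only the 384 GB/s value matches the 6700 XT's actual 192-bit GDDR6 bus:

```python
# Toy model of effective memory bandwidth with a large on-chip cache.
# hit_rate and cache_bw are hypothetical values for illustration only.
vram_bw = 384.0    # GB/s, 6700 XT raw VRAM bandwidth (192-bit GDDR6)
cache_bw = 1500.0  # GB/s, assumed Infinity Cache bandwidth (hypothetical)
hit_rate = 0.50    # assumed fraction of memory traffic served by the cache

# Requests that hit the cache move at cache speed; misses fall through to VRAM.
effective_bw = hit_rate * cache_bw + (1 - hit_rate) * vram_bw
print(f"Effective bandwidth: {effective_bw:.0f} GB/s")  # → 942 GB/s
```

Even with these made-up numbers, the weighted average lands well above the raw 384 GB/s, which is the intuition behind RDNA 2 getting more frame rate out of a narrower bus.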
With all of that said, it’s now time to test these GPUs clock-for-clock. We’ll be doing so on our new Ryzen 9 5950X test system using 32GB of DDR4-3200 CL14 memory. Let’s get into it.
This is quite interesting, though not totally unexpected. In Watch Dogs Legion we’re seeing very similar performance when matched at the same clock speed.
The average frame rates are identical at all three tested resolutions, though the 1% low performance of the 6700 XT is consistently better by a 4-9% margin, depending on the resolution.
The Assassin’s Creed Valhalla results are also very close, though this time the 5700 XT was slightly faster at all three tested resolutions, delivering an extra 2-3 fps.
Again both GPUs maintained a 1.8 GHz operating frequency, so the differences are down to the changes in architectural design.
The F1 2020 results were too close to call as we’re only looking at up to a 3% variation in the data. The 5700 XT was a few percent faster at 1080p and 1440p, while the opposite was true at 4K.
The Rainbow Six Siege results are a bit more interesting. At 1080p and 1440p the 6700 XT was ~3% slower than the 5700 XT, which is a negligible margin, but then at 4K we see the 6700 XT boosting performance by 8% over the 5700 XT. I suspect this improved 4K performance is due to the Infinity Cache.
We’re seeing a similar behavior in Shadow of the Tomb Raider. At 1080p the 6700 XT offers no advantage over the 5700 XT when matched clock-for-clock.
However, at 1440p the 6700 XT is up to 6% faster and then 10% faster at 4K. Again, I believe this is down to superior memory management.
Horizon Zero Dawn performance is pretty even across the board. The 6700 XT was a few frames slower at 1080p, much the same at 1440p and then a few frames faster at 4K, though overall you could say that the data is within the margin of error, despite the three-run average.
Testing with Death Stranding reveals identical performance between these two GPUs when clocked at 1.8 GHz. We’re looking at the exact same fps at all three tested resolutions.
Frame rates in Hitman 2 are also similar, though as the resolution increases the 6700 XT starts to pull ahead, if by a tiny 2-3 fps.
Finally, we have Cyberpunk 2077, and the performance trends are again similar: we’re looking at identical numbers at 1080p, with the 6700 XT just managing to nudge ahead at the higher resolutions.
What We Learned
So there you have it, a quick and easy benchmark. Basically, in today’s games, the majority of the performance uplift for the Radeon RX 6700 XT is confirmed to come from the increased operating frequency of the GPU cores. It’s a remarkable feat that AMD was able to make this jump on the same process node, though clearly they have experience working with TSMC’s 7nm process beyond GPUs.
AMD had announced its goal for RDNA 2: a 50% jump in perf-per-watt over RDNA 1, accomplished entirely through architectural improvements rather than process improvements. We knew it was coming, but I have to admit I was skeptical.
A lofty goal, but as far as we can tell they’ve done it with the 6700 XT, delivering 30% more performance than the 5700 XT, while reducing power usage by roughly 15%. With the next generation of games, we expect the margin between the 5700 XT and 6700 XT to widen, so that’ll be an interesting situation to monitor over the coming years.
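The perf-per-watt claim checks out from the two figures quoted in this article, ~30% more performance at ~15% lower power:

```python
# Perf-per-watt implied by the numbers in this article:
# ~30% more performance while drawing ~15% less power.
perf_ratio = 1.30   # 6700 XT performance relative to 5700 XT
power_ratio = 0.85  # 6700 XT power draw relative to 5700 XT

perf_per_watt_gain = perf_ratio / power_ratio - 1
print(f"Perf-per-watt improvement: {perf_per_watt_gain:.0%}")  # → 53%
```

That ~53% result lands right on AMD’s stated 50% target.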
In terms of performance, the 6700 XT is a necessary step forward from the 5700 XT to keep up with Nvidia’s fierce competition and equally aggressive RTX lineup this generation. Unfortunately, they’re all undermined by market conditions and price hikes across the board.
AMD has caught up to Nvidia in rasterization performance, and with RDNA3 expected to make another step forward, this is great news for gamers. Hopefully, by then things will have returned to normal which should see improved availability and trigger a price war between Nvidia, AMD, and who knows maybe even Intel, though that’s probably a bit too optimistic at this point.