
Ever since NVIDIA released its Maxwell graphics cards in 2015, AMD has been essentially absent from the high end. The RX480, RX580, Vega64, Radeon VII, and RX5700XT that followed never threatened NVIDIA's top-end products and could only compete with mid-range cards. As a result, the GPU market was effectively a monopoly for four years, and AMD's market share fell below 20% for lack of competitive GPUs. So gamers are excited: AMD is back, and the graphics card market is competitive again. To preemptively suppress AMD, NVIDIA priced this generation of 30 series cards a big step below the 20 series.
Unfortunately, both cards are out of stock. Although I bought the 3080 first, the 6800XT's lower power consumption would suit my ITX case better, and the 3080's driver-crash issues have also been quite annoying. So let's see which is better: the RX6800XT or the 3080?


The GPU specifications: the RX6800XT has 72 CUs (4608 stream processors), and each CU is paired with one ray tracing unit. The memory is 16GB of GDDR6 on a 256-bit bus; it is not GDDR6X, but the core carries 128MB of on-die Infinity Cache, so in theory GDDR6 should not lose to GDDR6X. The chip is made on TSMC's 7nm process, and external power is 8+8 pin.
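To put the "GDDR6 plus Infinity Cache need not lose to GDDR6X" claim in perspective, here is a minimal sketch of the raw bandwidth arithmetic. The 16 Gbps GDDR6 data rate for the 6800XT and the 3080's 320-bit / 19 Gbps GDDR6X figures are assumptions taken from public spec sheets, not numbers from this review:

```python
# Rough peak memory-bandwidth comparison (assumed public specs, not measured here).
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) / 8 * per-pin data rate (Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps

rx6800xt = peak_bandwidth_gb_s(256, 16.0)   # 256-bit GDDR6  @ 16 Gbps -> 512 GB/s
rtx3080  = peak_bandwidth_gb_s(320, 19.0)   # 320-bit GDDR6X @ 19 Gbps -> 760 GB/s

print(f"RX 6800 XT raw: {rx6800xt:.0f} GB/s, RTX 3080 raw: {rtx3080:.0f} GB/s")
# The 6800XT's 128MB Infinity Cache is meant to cover this raw deficit by
# serving a large share of accesses on-die instead of from GDDR6.
```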


The test platform is my own 3900X system. Since I don't have a 5900X, the results do not include SAM (Smart Access Memory). The 3080 here is a non-reference card, so it may perform slightly better than a stock 3080.
CPU: AMD R9 3900X @ 4.2GHz
Memory: Asgard 64GB DDR4 3600 C18
Motherboard: MSI B550 GAMING EDGE WIFI
Graphics card: AMD RX6800XT reference card / RTX3080 non-reference card
SSD: Seagate FireCuda 520 1TB + Intel S3610 1.6TB + Hynix 1.92TB MLC
Graphics drivers: Radeon 20.12.1, GeForce 460.79
First, the synthetic benchmarks. In Fire Strike Ultra (FSU), a DX11-focused test, the 6800XT leads the 3080 by a sizable margin, with GPU scores of 12475 vs 10642. A strong FSU result, however, is an old tradition for AMD cards.


In Time Spy (TS), the DX12 test at 2K resolution, the 3080 and 6800XT are essentially tied, with GPU scores of 17535 and 17326, about a 1% difference.


But in the 4K test, Time Spy Extreme (TSE), the 3080 pulls ahead and leads the 6800XT by about 5%.


Summarizing the above: under DX11 the 6800XT is clearly stronger than the 3080, but under DX12 it falls slightly behind at both 2K and 4K.


As for ray tracing, I don't think it even needs much testing: after all, this is only AMD's first generation of ray tracing hardware, while NVIDIA's current generation has improved markedly over its first. The 3080's ray tracing performance is nearly double the 6800XT's, with results of 47.64fps and 26.73fps respectively, a gap of roughly 78%.
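For reference, a minimal sketch that turns the scores quoted above into relative gaps (the TSE gap is only quoted as "about 5%", so it is not recomputed here):

```python
# Relative gaps computed from the scores quoted in this review.
def lead_percent(winner: float, loser: float) -> float:
    """How much the higher score leads the lower one, in percent."""
    return (winner / loser - 1) * 100

print(f"Fire Strike Ultra (DX11): 6800XT leads by {lead_percent(12475, 10642):.1f}%")  # ~17.2%
print(f"Time Spy (DX12, 2K):      3080 leads by {lead_percent(17535, 17326):.1f}%")    # ~1.2%
print(f"Ray tracing test:         3080 leads by {lead_percent(47.64, 26.73):.1f}%")    # ~78.2%
```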


On GPGPU (general-purpose computing on graphics processing units), each side has its strengths: NVIDIA has always been strong in single precision, while AMD is stronger in double precision, and the same holds for the 3080 and 6800XT. Thanks to the Infinity Cache, the 6800XT's memory copy speed is a staggering 1.1TB/s. But even with strong raw compute, AMD is still in an awkward spot against NVIDIA: CUDA's ecosystem is a high barrier, and AMD has nothing comparable. Professional software support is not on the same level either; Blender, for example, is accelerated mainly through CUDA. So at the moment, the place where AMD's compute performance sees the most use is actually mining, which is really embarrassing.
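To illustrate the ecosystem gap in practice, here is a minimal sketch assuming PyTorch (not something tested in this review): mainstream deep-learning frameworks probe for CUDA and silently fall back to the CPU on an AMD card, unless a vendor-specific build such as ROCm is installed.

```python
# Typical device-selection pattern in CUDA-centric frameworks (PyTorch shown).
# On an RTX 3080 this picks the GPU; on an RX 6800 XT it falls back to CPU
# unless a ROCm build of the framework is installed.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

x = torch.randn(4096, 4096, device=device)
y = x @ x  # the matmul runs on whichever device was selected above
print(y.shape)
```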


The 6800XT is a big improvement over the previous-generation 5700XT: traditional rasterized gaming performance jumps from roughly 2070 level straight to 3080 level, so you could say the two sides are back on the same starting line. I did not test power consumption, but according to other outlets' data the 6800XT draws less power than the 3080 FE.

Finally, if you need CUDA, NVIDIA is your only choice. That said, AMD cards already have some very suitable programs: running on the Vulkan API, their speed is no slower than the cuDNN-based Caffe version.

Author: frank
