ASUS Radeon RX 6750 XT ROG STRIX Gaming OC 12GB
ASUS is back with a STRIX OC edition; meet the factory-tweaked Radeon RX 6750 XT with dual BIOS and an impressive cooler. In Silent BIOS mode, this card is practically silent. Will 12GB of graphics memory be sufficient?

The year 2022 is roughly halfway through, and AMD is revamping its RDNA 2 graphics card portfolio with a new family of graphics cards. All are rehashed and thus reconfigured products, which means this is not a new generation of cards, but rather an incremental improvement based on the same RDNA 2 architecture. The main difference is the use of quicker memory. All three graphics cards include up to 18Gbps GDDR6 memory chips, up from 16Gbps on the non-50 versions. Some AIBs will configure the 6650 XT at 17.5 Gbps, though, with OC models configured at 18 Gbps. All graphics cards also use a GPU comparable to their predecessor's, but arrive with increased memory and boost/game clock rates. As such, the AMD Radeon RX 6950 XT will sit above the Radeon RX 6900 XT, and the Radeon RX 6750 XT will sit above the Radeon RX 6700 XT. Of course, the Radeon RX 6650 XT will sit above the Radeon RX 6600 XT. Consider this to be AMD's own 'SUPER' refresh; the shader and RT core counts remain identical without any big advancements. The lineup is intended to provide performance enhancements in the 5 to 10% range, and memory plays a significant role in meeting these objectives. Once again, the new lineup is as follows:
- Radeon RX 6950 XT: 5120 SPs, 16GB GDDR6 (18Gbps), 256-bit bus, 335W TBP
- Radeon RX 6900 XT: 5120 SPs, 16GB GDDR6 (16Gbps), 256-bit bus, 300W TBP
- Radeon RX 6750 XT: 2560 SPs, 12GB GDDR6 (18Gbps), 192-bit bus, 250W TBP
- Radeon RX 6700 XT: 2560 SPs, 12GB GDDR6 (16Gbps), 192-bit bus, 230W TBP
- Radeon RX 6650 XT: 2048 SPs, 8GB GDDR6 (18Gbps), 128-bit bus, 180W TBP
- Radeon RX 6600 XT: 2048 SPs, 8GB GDDR6 (16Gbps), 128-bit bus, 160W TBP
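To put the 16-to-18 Gbps bump into numbers, here is a quick back-of-the-envelope sketch (the standard bus-width times data-rate product; nothing vendor-specific assumed):

```python
# Peak GDDR6 bandwidth = bus width (bits) / 8 * effective data rate (Gbps)
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

for name, bus, rate in [
    ("RX 6700 XT (16 Gbps, 192-bit)", 192, 16.0),
    ("RX 6750 XT (18 Gbps, 192-bit)", 192, 18.0),
    ("RX 6900 XT (16 Gbps, 256-bit)", 256, 16.0),
    ("RX 6950 XT (18 Gbps, 256-bit)", 256, 18.0),
]:
    print(f"{name}: {gddr6_bandwidth_gbs(bus, rate):.0f} GB/s")
# 6750 XT: 432 GB/s vs. 384 GB/s on the 6700 XT -- a 12.5% raw bandwidth uplift
```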
ASUS Radeon RX 6750 XT ROG STRIX Gaming OC
Introducing the ASUS ROG STRIX Gaming OC. This premium graphics card is equipped with 12GB of 192-bit, 18-Gbps GDDR6 memory. It features triple-fan cooling and a dual-BIOS design with performance and silent modes; once the GPU warms up, the three fans begin to spin and cool, and until that moment the card runs passively. The modified RX 6750 XT graphics card has an outspoken black design with a backplate. In the I/O area, the card includes three DisplayPort 1.4 ports and one HDMI 2.1. Clock frequencies are listed at a 2623MHz Boost. However, the Boost frequency isn't a fixed setting anymore these days and can even vary a little per card. The TGP allowance in performance mode is 230W, but as stated, depending on the power allowance and even ASIC quality, it can have different values. The 12GB of GDDR6 memory clocks in at 2250 MHz (18 Gbps effective). We tested the factory-tweaked OC variant, which still overclocks fairly well and makes this card somewhat quicker than the standard model's reference specifications. The most bitter pill to swallow is the starting price of $549 USD. We have a lot to talk about; have a peek at what is being reviewed today. On the next two pages you'll find some in-house photos, after which we head onwards into the review.
Product Photos
The Radeon RX 6750 XT includes 12 GB of GDDR6 memory, which implies it is coupled to a 192-bit wide memory bus, which remains a constraint. AMD attempts to compensate by operating it at 18 Gbps (effective data rate) and adding extra L3 cache on-die on the GPU. Both will assist at Full HD and WQHD resolutions, but if the L3 cache is drained, mostly in GPU-bound Ultra HD, performance will suffer.
ASUS equips the graphics card with its Axial fan technology, and the cooler adopts a maximum-contact base plate. As noted, this card is close to three slots wide, and, as befits a ROG STRIX card, there are additional fan headers towards the back. The card's backplate is perforated and adopts a familiar black and silver color scheme while maintaining the customary four display outputs; a single HDMI 2.1 and three DisplayPort 1.4.
Power is supplied via an 8+8-pin layout neatly recessed into the backplate, and considering the enormous magnitude of the cooler, a factory overclock is included as standard; don't expect miracles from it, though. ASUS officially lists a boost frequency of 2,623MHz (2,554MHz game clock). Frequencies oscillate around ~2,550MHz in real-world use, while the large 12GB GDDR6 frame buffer, connected via a 192-bit bus, operates at 18Gbps.
Do you prefer to keep your GPU operating well within its capabilities? ASUS's alternative BIOS, accessed through the hardware toggle on the top side, reduces the boost clock a small notch whilst remaining silent. Then again, the card hardly makes any noise in performance mode, so why even bother?
Product Photos
The 6750 XT series will not (yet) replace the 6700 XT; the Radeon RX 6750 XT features the same 2560 stream/shader processor count as the 5700 XT and 6700 XT. The L3 cache and RDNA 2 GPU architecture, on the other hand, result in a speedier product. This card has a maximum boost speed of 2624 MHz (reference: 2600 MHz). The BIOS supports fan stop in idle mode, which means the card is passive (no fans spinning) while not under load, bringing you the benefit of a silent device.
In terms of the card itself, it employs the same shroud design as the RX 6700 XT STRIX. That means it's largely black plastic, with a few silver accents for contrast. Although the silver portions may not appeal to everyone with a stealthy blacked-out build, I think it's a great-looking card. Due to the cooling design, please be aware that the product emits a significant volume of heated air; you'll need a PC with well-ventilated intake (front) and exhaust (rear/top). The BIOS switch provides Performance and Silent modes; game-performance-wise, the two are identical, as the only difference is the fan RPM. The temperature difference is negligible, but the difference in acoustics is enormous, so toggle the card to Silent mode and be done with it, as you will not even hear the card in this configuration. In the pages that follow, we'll demonstrate and quantify everything.
Two 8-pin power connectors are required for the video card. ASUS rates the TGP for this video card at 230W. Other than that, the card offers you three DisplayPorts and one HDMI for display outputs.
The card weighs a little over 1.5 kilograms and is 2.9 slots wide; its length is about 32 centimeters. If you do not purchase the OC model, the card comes reference-clocked; otherwise, the features are the same. These are the factory-default clock frequencies.
GPU architecture and specifications
You may expect several graphics cards within the new 6000 lineup. AMD is strictly focusing on the high-end to enthusiast class with this release, but we have no doubt that the RDNA 2 architecture will find its way to the lower spectrum of the product line over time as well. For the initial Big Navi release, we saw the two Radeon RX 6800 (XT) graphics cards and a Radeon RX 6900 XT; they are all based on the same GPU holding a whopping 26.8 billion transistors. Inevitably, a Radeon RX 6700 series followed as well. AMD claims a 50% performance-per-watt improvement and up to double last-gen performance. Let's first have a look at an overview of the specifications per product. As you can see, AMD has been able to nearly double the shader/stream processor count for a fully enabled GPU. However, take a look at the difference in transistor count: yes, that's a big chip alright, reasonably explained by the new Infinity Cache as well as the additional RT cores. As you can observe, a fully enabled Big Navi GPU holds 80 CUs (compute units); each CU holds 64 shading/stream cores. Multiplied, you end up at 80 × 64 = 5120 shader cores. Each CU has 1 RT (raytracing) core, thus 80 cores for the 6900 XT, 72 for the 6800 XT, and 60 for the 6800. Historically, AMD has had 4 texture units per CU, so that's 320 units for the 6900 XT, 288 for the 6800 XT, and 240 units for the 6800. The ROP count is 128 units for the 6800 XT and 6900 XT, and 96 for the 6800.
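Those unit counts all follow from the CU count; a minimal sketch of the arithmetic, using the per-CU ratios named above:

```python
def rdna2_units(compute_units: int) -> dict:
    """Derive unit counts from the CU count using the per-CU ratios named above."""
    return {
        "shaders": compute_units * 64,  # 64 stream processors per CU
        "rt_cores": compute_units,      # 1 ray accelerator per CU
        "tmus": compute_units * 4,      # 4 texture units per CU
    }

print(rdna2_units(80))  # RX 6900 XT -> 5120 shaders, 80 RT cores, 320 TMUs
print(rdna2_units(72))  # RX 6800 XT -> 4608 shaders, 72 RT cores, 288 TMUs
print(rdna2_units(60))  # RX 6800    -> 3840 shaders, 60 RT cores, 240 TMUs
```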
Infinity Cache
One of the biggest changes from the previous GPU architecture is Infinity Cache (IC). Why IC? Well, the choice of GDDR6 memory is a far cheaper approach than what NVIDIA is doing with GDDR6X, which helps the bill of materials for a graphics card. However, with 16GB of GDDR6, AMD faces the challenge that the memory bus is a bit limited at 256 bits. A 512-bit bus is complicated in terms of signaling and wiring, so AMD figured that adding a level of cache memory would take the load off the memory bus. That helps tremendously in performance per watt, and it also greatly helps raytracing. Very simply put, IC is cache memory, and that cache memory (128MB) is placed directly into the chip itself (on-die). This is also one of the reasons that the Navi 21 GPU is considerably larger than Navi 10. Normally a GPU has a few megabytes of cache memory (L1 and L2); then there's a huge gap between that and the many gigabytes of VRAM in the frame buffer. This gap is bridged by Infinity Cache. Arbitrarily speaking, you could look at IC as an L3 cache that can feed the GPU with sufficient data faster while reducing frame-buffer utilization. AMD states that 128MB of Infinity Cache combined with a 256-bit memory bus provides more than twice the bandwidth of a 384-bit memory bus, so a lot of the transistor budget was spent on this feature. The cache does not just help performance; the GPU also benefits from lower energy consumption because there is less utilization of the memory controllers. The 128MB Infinity Cache is implemented on the RX 6900 XT, 6800 XT, and RX 6800 graphics cards. In our findings, this big 'L3' cache helps out greatly.
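As a purely illustrative model (our own simplification, with hypothetical bandwidth and hit-rate numbers, not AMD's published math), you can treat the effective bandwidth as a hit-rate-weighted mix of cache and VRAM bandwidth:

```python
def effective_bandwidth_gbs(hit_rate: float, cache_bw_gbs: float, vram_bw_gbs: float) -> float:
    """Illustrative hit-rate-weighted bandwidth mix (a simplification, not AMD's math)."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

# Hypothetical figures purely for illustration: a fast on-die cache in front of
# 512 GB/s of GDDR6; even a 60% hit rate lifts the mix well past a 384-bit bus.
print(effective_bandwidth_gbs(0.6, 1600.0, 512.0))  # ~1164.8 GB/s
```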
Raytracing
So what is raytracing, really? With raytracing, you are basically mimicking the behavior, looks, and feel of a real-life environment in a computer-generated 3D scene. Wood looks like wood, and leaking resin will shine and refract its environment and lighting accurately. Glass and waves of water refract their surroundings and light rays as they would in reality. Can true 100% raytracing be applied in games? The short answer: no, only partially. As you have just read and hopefully remember, Microsoft has released an extension to DirectX: DirectX Raytracing (DXR). AMD now has dedicated hardware built into its GPUs to accelerate certain raytracing features. You have seen these in current games mostly as shadow optimizations, but most commonly used are reflections (water, puddles, windows, tiles, and so on). Rasterization has been the default renderer for a long time, and you can add a layer of raytracing on top of that. Combining rasterization and raytracing is what we like to call hybrid raytracing, and it offers the best of both worlds. But make no mistake: the RT cores inside Big Navi can do full-scene raytracing; they just would never be fast enough for real-time rendering.
PCI Express Gen 4.0
New on the spec list is support for PCI-Express 4.0. AMD made big bets with the original Navi products and already moved to PCIe Gen 4.0, as well as with its chipsets and processors. But what does PCIe Gen 4.0 bring to the table? Simply put, more bandwidth for data to pass through.
On the 4.0 interface, you'll be hard-pressed to run out of bandwidth, as the bandwidth gets doubled up per lane. Of course, there has been a recent PCI-Express Gen 5.0 announcement as well; for ease of mind, I already inserted it into the table.
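For reference, the per-lane throughput roughly doubles with every generation; a short sketch using the published transfer rates and 128b/130b encoding overhead:

```python
# Per-lane throughput = transfer rate (GT/s) * encoding efficiency / 8 bits per byte
PCIE_GENS = {
    "PCIe 3.0": 8.0,   # GT/s
    "PCIe 4.0": 16.0,
    "PCIe 5.0": 32.0,
}
ENCODING = 128 / 130   # 128b/130b line encoding used since PCIe 3.0

for gen, gts in PCIE_GENS.items():
    per_lane = gts * ENCODING / 8
    print(f"{gen}: {per_lane:.2f} GB/s per lane, {per_lane * 16:.1f} GB/s at x16")
# PCIe 4.0 x16 works out to roughly 31.5 GB/s of usable bandwidth per direction
```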
Hardware Installation
The installation of any graphics card is straightforward these days. Once the card is seated into the PC, make sure you hook up the monitor and, of course, any external power connectors like 6/8-pin or the new 12-pin PEG power connectors. Preferably get yourself a power supply that has these PCIe PEG connectors natively. Purchase a quality power supply, calculate/estimate your peak power consumption for the entire PC, and double that number for the power supply, as your PSU is most efficient at half its load value. So, if during gaming you consume 300 Watts on average for the entire PC, we'd recommend a 600 Watt power supply as a generic rule.
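That sizing rule is easy to express; a sketch of the 'double your estimated peak draw' guideline, rounding up to a common PSU wattage tier:

```python
import math

def recommended_psu_watts(peak_system_watts: float) -> int:
    """Double the estimated peak system draw so the PSU runs near its ~50% load sweet spot."""
    return math.ceil(peak_system_watts * 2 / 50) * 50  # round up to the next 50 W tier

print(recommended_psu_watts(300))  # 600 -> a 600 Watt PSU for a 300 W gaming PC
print(recommended_psu_watts(330))  # 700
```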
Once done, we boot into Windows, install the latest drivers, and after a reboot all should be working. No further configuration is required unless you'd like to tweak the settings, for which you can open the control panel.
Why a power interposer? Well, NVIDIA normally reports both chip and board power via its API, whereas AMD reports what appears to be a value in between chip-only and full board power, as verified by third-party interposer testing methodologies, including PCAT and others. This means that you can't rely on API-reporting software such as GPU-Z or HWiNFO64 for accurate chip or board measurements to compare with. Our setup polls every 100ms, allowing us to measure peak and typical energy consumption objectively.
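A minimal sketch of how those 100 ms interposer samples reduce to the typical and peak figures we quote (the sample values below are hypothetical):

```python
def summarize_power(samples_w: list[float], interval_s: float = 0.1) -> dict:
    """Reduce a series of board-power samples to typical/peak draw and energy used."""
    avg = sum(samples_w) / len(samples_w)
    return {
        "typical_w": round(avg, 1),
        "peak_w": max(samples_w),
        "energy_wh": round(avg * len(samples_w) * interval_s / 3600, 3),
    }

# Hypothetical 100 ms samples captured during a gaming load:
print(summarize_power([268.0, 274.5, 281.2, 299.8, 271.4]))
# {'typical_w': 279.0, 'peak_w': 299.8, 'energy_wh': 0.039}
```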
Here is our power supply recommendation:
- Radeon RX 6500 XT – On your average system we recommend a 450-500 Watt power supply unit.
- Radeon RX 6650 XT – On your average system we recommend a 550 Watt power supply unit.
- Radeon RX 6750 XT – On your average system we recommend a 650 Watt power supply unit.
- Radeon RX 6800 – On your average system we recommend a 600 Watt power supply unit.
- Radeon RX 6800 XT – On your average system we recommend a 650-700 Watt power supply unit.
- Radeon RX 6950 XT – On your average system we recommend a 750 Watt power supply unit.
If you are going to overclock your GPU or processor, then we do recommend you purchase something with some more stamina. There are many good PSUs out there, please do have a look at our many PSU reviews as we have a lot of recommended PSUs for you to check out in there. Let’s move to the next page where we’ll look into GPU heat levels and noise levels coming from this graphics card.
Graphics card temperatures
So here we’ll have a look at GPU temperatures. First up will be IDLE (desktop) temperatures as reported through the software on the thermal sensors of the GPU. Overall anything below 50 Degrees C is considered okay, anything below 40 Degrees C is admirable. We add some other cards at random that we have recently tested in the chart. But what happens when we are gaming? We fire off an intense game-like application at the graphics card and measure the highest temperature of the GPU.
So with the card fully stressed we kept monitoring temperatures and noted down the GPU temperature as reported by the thermal sensor. These tests have been performed with a 20~21 Degrees C room temperature, this is a peak temperature based on a GPU stress loop.
Long Duration Stress Temperature and GPU Throttling clock
Before we start benchmarking, we always heat up the card. During the looped warm-up sequence of at least 15 minutes of GPU gaming load, we observe what dynamic clock the GPU throttles at; here, that's a ~2650 MHz threshold once the GPU has warmed up. We do this prior to all our reviews, as a cold card will boost a notch higher and could influence the test results.
Thermal Imaging Temperature Measurements
To visualize the heat coming from the product or component being tested, we make use of thermal imaging hardware, also known as a FLIR camera. FLIR is a brand name, short for Forward-Looking Infrared. In a thermal imaging camera, a special lens focuses the infrared light emitted by all of the objects in view. This focused light is scanned by a phased array of infrared detector elements. The detector elements create a very detailed temperature pattern called a thermogram. It only takes about one-thirtieth of a second for the detector array to obtain the temperature information to make the thermogram. This information is obtained from several thousand points in the field of view of the detector array. The thermogram created by the detector elements is translated into electric impulses. The impulses are sent to a signal-processing unit, a circuit board with a dedicated chip that translates the information from the elements into data for the display. The signal-processing unit sends the information to the display, where it appears as various colors depending on the intensity of the infrared emission. The combination of all the impulses from all of the elements creates the image. You can see hotspots on the card or PCB indicating temperature bleeds, as well as how heat is distributed throughout a product.
Acoustic Levels
When graphics cards produce a lot of heat, that heat needs to be transported away from the hot core as fast as possible. Often you'll see massive active fan solutions that can indeed get rid of the heat, yet all these fans can make the PC a noisy son of a gun. Do remember that the test we perform is rather subjective. We bought a certified dBA meter and measure how many dBA originate from the PC. Why is this subjective, you ask? Well, there is always noise in the background, from the streets, from the HDD, the PSU fan, etc., so this is, by a mile or two, an imprecise measurement. You could only achieve objective measurement in a sound test chamber. The human hearing system has different sensitivities at different frequencies, meaning that the perception of noise is not at all equal at every frequency. Noise with significant measured levels (in dB) at high or low frequencies will not be as annoying as noise whose energy is concentrated in the middle frequencies. In other words, measured noise levels in dB will not reflect the actual human perception of loudness. That's why we measure the dBA level: a specific circuit is added to the sound level meter to correct its reading in this regard. This reading is the noise level in dBA; the letter A indicates the correction that was made to the measurement. Frequencies below 1 kHz and above 6 kHz are attenuated, whereas frequencies between 1 kHz and 6 kHz are amplified by the A weighting.
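For the curious, the A-weighting correction itself is a fixed frequency-response curve (the standard IEC 61672 definition; a sketch):

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting gain in dB at frequency f (Hz), per the standard curve."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 dB at 1 kHz

for f in (100, 1000, 3000, 10000):
    print(f"{f} Hz: {a_weighting_db(f):+.1f} dB")
# roughly -19 dB at 100 Hz, 0 dB at 1 kHz, ~+1 dB around 2-4 kHz, negative again at 10 kHz
```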
Acoustic Levels at 40cm
The graphics card turns off its fans in idle (fan-stop), thus in desktop situations the card is passive. Under full gaming load, we hit ~37 dBA.
Test Environment & Equipment
Here is where we begin the benchmark portion of this article, but first let me show you our test system plus the software we used.
Mainboard: ASUS X570 Crosshair VIII HERO – Review
Processor: Ryzen 9 5950X (16c/32t) @ defaults – Review
Graphics card: Radeon RX 6750 XT (ASUS ROG STRIX Gaming OC), rBAR active
Memory: 32 GB (4x 8GB) DDR4 3600 MHz
Power Supply Unit: 850 Watt Platinum Certified Corsair RM850X – Review
Monitor: 4K UHD Monitor at resolutions 1920×1080, 2560×1440 and 3840×2160
OS related software:
- Windows 11 64-bit
- DirectX 9/10/11/12 End-User Runtime (Download)
- AMD Radeon Software Driver 22.5 Series (Download)
- NVIDIA GeForce Driver 512 series (Download)
A Word About “FPS”
What are we looking for in gaming, performance-wise? First off, obviously, Guru3D tends to think that all games should be played at the best image quality (IQ) possible. There's a dilemma, though: IQ often interferes with the performance of a graphics card. We measure this in FPS, the number of frames a graphics card can render per second; the higher it is, the more fluid your game will feel.
A game's frames per second (FPS) is a measured average over a series of tests. That test is often a timedemo, a recorded part of the game which is a 1:1 representation of the actual game and its gameplay experience. After forcing the same image quality settings, this timedemo is then used for all graphics cards so that the actual measuring is as objective as possible.
- If a graphics card manages less than 30 FPS, the game is not really playable; we want to avoid that at all costs.
- From 30 FPS up to roughly 40 FPS you'll be able to play the game, with perhaps a tiny stutter at certain graphically intensive parts; overall a very enjoyable experience. Match the best possible resolution to this result and you'll have the best possible rendering quality versus resolution; hey, you want both of them to be as high as possible.
- When a graphics card is doing 60 FPS on average or higher, you can rest assured that the game will likely play extremely smoothly at every point, with every possible in-game IQ setting turned on.
- Over 100 FPS? You either have a MONSTER graphics card or a very old game.
Monitor Setup
Before playing games, setting up your monitor's contrast & brightness levels is a very important thing to do. I realized recently that a lot of you have set up your monitor improperly. How do we know this? Because we receive a couple of emails every now and then telling us that a reader can't distinguish between the benchmark charts (colors) in our reviews. If that happens, your monitor is not properly set up.
What Are You Looking For?
- Top bar – This simple test pattern is evenly spaced from 0 to 255 brightness levels, with no profile embedded. If your monitor is correctly set up, you should be able to distinguish each step, and each step should be visually distinct from its neighbors by the same amount. Also, the dark-end step differences should be about the same as the light-end step differences. Finally, the first step should be completely black.
- The three lower blocks – The far-left box is a black box with, in the middle, a little box a tint lighter than black. The middle box is a lined square with a central grey square. The far-right white box has a smaller "grey" box that should barely be visible.

You should be able to distinguish all the small differences; only then is your monitor properly set up in terms of contrast and saturation.
Hitman III (API: DirectX 12)
Hitman 3 is the dramatic climax to the World of Assassination trilogy, transporting players on a globetrotting trip over enormous sandbox environments. Agent 47 returns as a ruthless professional, pursuing the most significant contracts of his career with the assistance of his agency supervisor, Diana Burnwood. 47 teams up with his long-lost friend Lucas Grey; their ultimate goal is to destroy Providence's partners, but as their hunt escalates, they are forced to adapt. When the dust settles, 47 and the world he inhabits will never be the same. — IO Interactive. We use a run with Ultra picture quality settings for this game, which is the highest quality level available.
Assassin's Creed: Valhalla (API: DirectX 12)
Set in 873 AD, the game recounts a fictional story of the Viking invasion of Britain. The player controls Eivor, a Viking raider who becomes embroiled in the conflict between the Brotherhood of Assassins and the Templar Order. We test the game in ultra-quality settings.
Shadow Of The Tomb Raider (API: DirectX 12)
Set shortly after the events of Rise of the Tomb Raider, its story follows Lara Croft as she ventures through Meso-America and South America to the legendary city Paititi, battling the paramilitary organization Trinity and racing to stop a Mayan apocalypse she has unleashed. Lara must traverse the environment and combat enemies with firearms and stealth as she explores semi-open hubs. In these hubs she can raid challenge tombs to unlock new rewards, complete side missions, and scavenge for resources which can be used to craft useful materials.
This particular test has the following enabled:
- DX12
- Highest Quality mode
- TAA enabled
- HBAO+ enabled
- Pure Hair Normal (on)
Our scores are average frame rates so you need to keep a margin in mind for lower FPS at all times. As such, we say 40 FPS in your preferred monitor resolution for this game should be your minimum, while 60 FPS (frames per second) can be considered optimal.
Far Cry 6 (API: DirectX 12)
We use a run with the best image quality settings for this game, the highest possible quality mode (Ultra). Games typically should be able to run in the 60 FPS range combined with your monitor resolution. From there, you can enable/disable things if you need more performance or demand even better rendering quality. There are a lot of features you can configure, but we recommend you stick to the best quality settings and leave it at that. The game includes FPS limiters; in addition, there are different graphics presets like low, medium, high, and ultra, plus an integrated benchmark. Optionally, you can enable an HD Quality texture pack. Since we predominantly test with 8GB graphics cards these days, we install and enable it by default.
Codemasters Formula 1 2021 (API: DirectX 12)
F1 2021 is a racing video game developed and published by Codemasters, based on the 2021 Formula One season, including its circuits and the twenty drivers and ten teams competing in the season. F1 2021 is the twelfth installment in the Formula One video game franchise developed by Codemasters. There are a lot of features you can configure; we use Ultra High quality and apply 16x AF manually. The in-game anti-aliasing mode is TAA (and not TAA checkerboard). We do enable it, but on a high-res screen you could do fine without AA and gain some extra performance as well.
The Witcher III: Wild Hunt (API: DirectX 11)
Wild Hunt improves on several aspects of past games. Combat revolves around an action role-playing system combined with the use of magic. The fighting system has been completely revamped. Wild Hunt introduces some new mechanics, such as the Witcher senses, combat on horseback and at sea, swimming underwater, and using a crossbow. Additionally, Geralt can now jump, climb, and vault over smaller obstacles. Our measurements are taken in-game. Our settings are Ultra quality with AA enabled. Please find an overview of the exact settings here. HairWorks is DISABLED to objectively compare AMD and NVIDIA cards. Our test run has the following enabled:
- DX11
- Ultra mode
- AA enabled
- 16x AF enabled
- SSAO enabled
- NVIDIA HairWorks OFF
- Other settings ON
Watch Dogs: Legion (API: DirectX 12)
The London branch of DedSec, led by Sabine Brandt and her newly crafted AI Bagley, detects armed intruders planting explosives in the Houses of Parliament. Sabine assigns DedSec operative Dalton Wolfe to defuse the bombs. Although he manages to have some success, he quickly learns that the intruders are from a rogue hacker group called "Zero Day", who seek to prevent his interference. Learning that DedSec has been attacked, which causes Sabine to go underground and shuts down Bagley, Dalton attempts to complete his mission. We test with the internal benchmark at ultra quality settings.
We enable DirectX Raytracing reflections.
DX12: Cyberpunk 2077 (API: DirectX 12)
Cyberpunk 2077 was developed using REDengine 4 by a team of around 500, exceeding the number that worked on the studio's previous game, The Witcher 3: Wild Hunt (2015). We test the game at the Chinatown level, heavy on the GPU with lots of movement. We test in ultra-quality mode with raytraced shadows and reflections enabled.
Forza Horizon 5 PC (2021)
Forza Horizon 5 for PC has been added to our graphics card benchmark suite as a raytracing title, and as such, we thought it would be fun to do a quick performance evaluation of the game with raytraced results for compatible GeForce and Radeon graphics cards. Using the most recent DXR-compatible AMD Radeon and NVIDIA GeForce graphics cards, we'll see how well the game performs on the PC platform in terms of graphics card performance. Our main test is based on the extreme quality setting.
DX11: Unigine: Superposition
Unigine, known for its 3D engine and benchmarks like Heaven and Valley, has released a new benchmark, Superposition. The software allows you to test their game engine and outputs scores and frame rates for you to compare. The benchmark runs on the Unigine 2 engine. The test runs through a classroom that a professor uses for quantum mechanics. There's been an explosion, but the professor is nowhere to be found. You have the only chance to cast some light upon this incident by going deep into the matter of quantum theory: a thorough visual inspection of the professor's records and instruments will help lift the veil on the mystery. Superposition at 4K or higher is painful on VRAM; we test in the extreme mode, which uses just over 3 GB of memory with modern graphics cards.
DX11: 3DMark FireStrike (Ultra)
3DMark includes everything you need to benchmark your hardware. With its various tests you can bench everything from smartphones and tablets to notebooks and home PCs, to the latest high-end, multi-GPU gaming desktops. And it's not just for Windows: with 3DMark you can compare your scores with Android and iOS devices too. Here (below) are the 3DMark FireStrike results. FireStrike is the showcase DirectX 11 benchmark designed for high-performance gaming PCs.
DX12: 3DMark Time Spy
Time Spy is a new DirectX 12 benchmark test, available as DLC right now on Steam and to all Windows editions of 3DMark. With its pure DirectX 12 engine, built from the ground up to support new features like asynchronous compute, explicit multi-adapter, and multi-threading, Time Spy is the ideal benchmark for testing the DirectX 12 performance of the latest graphics cards. Developed with input from AMD, Intel, Microsoft, NVIDIA, and the other members of the Futuremark Benchmark Development Program, Time Spy shows the exciting potential of low-level, low-overhead APIs like DirectX 12.
DirectX Hybrid Raytracing
We have a handful of applications available to measure, to a certain extent, DirectX raytracing. Below are the results for the Port Royal DX raytracing test; these are reference card results. You can argue about the test itself and how good it looks, but 3DMark offers a valid hybrid raytracing test that applies raytraced shadows and reflections, and as such we can observe generational performance increases. Please note that the result set below is based on the reference review and serves as an indication of the performance of cards in the same class.
DirectX Raytracing (full path)
Real-time ray tracing is hefty on the GPU. The latest graphics cards have dedicated hardware that’s optimized for ray-tracing operations. Despite the advances in GPU performance, the demands are still too high for a game to rely on ray tracing alone. That’s why games use ray tracing to complement traditional rendering techniques. The 3DMark DirectX Raytracing feature test is designed to make ray-tracing performance the limiting factor. Instead of relying on traditional rendering, the whole scene is ray-traced and drawn in one pass. The test result depends entirely on ray-tracing performance, which means you can measure and compare the performance of dedicated ray-tracing hardware in the latest graphics cards.
Hybrid raytracing is what you currently see in games: the game uses the traditional shading render and applies raytracing effects like shadows and reflections on top. What we measure below, in contrast, is pure raytracing.
API Performance: Vulkan vs DirectX12 Raytracing
With Basemark GPU we focus on the Windows 11 platform and see what that brings us in performance, as we can measure OpenGL, DirectX 12 and Vulkan, which is a nice option from an API-comparison perspective. We use the reference result set here as an indication of API performance.
GPGPU: Blender and Indigo
Blender offers a wide variety of options and APIs, depending on your graphics card. We fire off a scene where we render a classroom; we only allow the GPU to render in the benchmark application. There are API-related challenges to address with Blender though:
- AMD Radeon cards support OpenCL solely
- NVIDIA GeForce cards up to Pascal support CUDA – but not OpenCL or OptiX
- NVIDIA GeForce RTX cards based on Turing can be assigned CUDA or OptiX – but not OpenCL
In the chart, we have enabled OptiX where possible, which is on the RTX series. Where OptiX is not possible, we run CUDA. OpenCL is not available for GeForce cards, and the Radeon graphics cards have just one option, OpenCL. Ergo, we choose the fastest API available for the graphics card we test.
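The API choice boils down to a simple decision table; a sketch reflecting the constraints listed above (the vendor/family labels are our own shorthand):

```python
def blender_render_api(vendor: str, family: str) -> str:
    """Pick the fastest available Blender render API per the constraints above."""
    if vendor == "AMD":
        return "OpenCL"  # Radeon cards: OpenCL only
    if vendor == "NVIDIA":
        return "OptiX" if family == "RTX" else "CUDA"  # RTX -> OptiX, older -> CUDA
    raise ValueError(f"unknown vendor: {vendor}")

print(blender_render_api("AMD", "RDNA2"))      # OpenCL
print(blender_render_api("NVIDIA", "RTX"))     # OptiX
print(blender_render_api("NVIDIA", "Pascal"))  # CUDA
```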
Note: Due to a change in the benchmark, we are starting from scratch again with blender results.
Formula 1 2021 and Watch Dogs: Legion (RSR)
In addition to raytracing results, we now need to consider the performance gains that can be obtained through technologies such as DLSS and RSR. It is no longer practical to evaluate all games in this manner due to the additional data sets and time necessary. For example, if we need to test ten games at three different resolutions, we will need to run approximately 30 benchmarks. Raytracing each game adds another 30 test runs, and further result sets with, for example, DLSS add another 30 test runs on top of that. As a result of the adoption of technologies such as raytracing, RSR/FSR, and DLSS over the past two years, the data sets have increased from 30 test runs to nearly 100. We've therefore chosen a number of rasterized DX11/12 games and a couple of pure raytraced games, and then included some FSR and/or DLSS results on a separate page for comparison.
To be able to understand the charts:
- Our native resolution is UHD, so that's always the target. We can now select Full HD or WQHD to render at that resolution and scale upwards to Ultra HD.
Frametime and latency performance
The charts below will show you graphics anomalies like stutters and glitches in a plotted chart, based on frame time and pacing measurements.
- FPS mostly measures performance: the number of frames rendered per passing second.
- Frametime (AKA Frame Experience) recordings mostly measure and expose anomalies; here we look at how long it takes to render one frame. Measure that chronologically and you can see anomalies like peaks and dips in a plotted chart, indicating something could be off.
We have a detailed article (read here) on the methodology behind it all. Basically, the time it takes to render one frame can be monitored and tagged with a number; this is latency. One frame can take, say, 17 ms. Higher latency can indicate a slow framerate, and weird latency spikes indicate stutters, jitter, or twitches; basically, anomalies that are visible on your monitor. These measurements show anomalies like small glitches and stutters that you can sometimes (and please do read that well, sometimes) see on screen. Below, I'd like to run through a couple of titles with you. Bear in mind that average FPS often matters more than frametime measurements.
Please understand that a lower frame time means a higher FPS, so for these charts, lower = better. Huge spikes would be stutters, thick lines indicate bad frame pacing, and gradual changes in the line indicate framerate variation.
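To make that concrete, here is a minimal sketch of how a frametime trace maps onto FPS and stutter detection (the spike threshold and sample trace are our own illustrative choices):

```python
def analyze_frametimes(frametimes_ms: list[float], spike_factor: float = 2.5) -> dict:
    """Turn a frametime trace into average FPS and a count of stutter spikes."""
    avg_ms = sum(frametimes_ms) / len(frametimes_ms)
    spikes = [t for t in frametimes_ms if t > avg_ms * spike_factor]  # large outliers read as stutter
    return {
        "avg_fps": round(1000.0 / avg_ms, 1),
        "worst_frame_ms": max(frametimes_ms),
        "stutter_spikes": len(spikes),
    }

# Hypothetical trace: a steady ~17 ms cadence interrupted by one 70 ms stutter
print(analyze_frametimes([16.7, 17.1, 16.9, 70.2, 17.0, 16.8]))
# {'avg_fps': 38.8, 'worst_frame_ms': 70.2, 'stutter_spikes': 1}
```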
As you might have observed, we’re experimenting a bit with our charts and methodology. Below the games at 1920×1080 (Full HD), with image quality settings as used throughout this review. The results overall are fine.
Above: Hitman III
Above: Watch Dogs Legion
Above: Shadow of the Tomb Raider
Above: Assassin's Creed: Valhalla
Above: Far Cry 6
Above: Formula 1 2021
Overclocking the graphics card
For most graphics cards, you can apply a simple series of tricks to boost the overall performance a little. Typically you can tweak the core clock frequencies and voltages. By increasing the frequency of the video card's memory and GPU, we make the video card run more calculation clock cycles per second. It sounds hard, but it can really be done in less than a few minutes. I always tend to recommend that novice users and beginners not increase the frequency any higher than 5% on the core and memory clock. Example: if your GPU runs at 1500 MHz, that 5% works out to a 75 MHz ceiling, approached in small increments of, say, 25 MHz.
More advanced users often push the frequency way higher. Usually, when your 3D graphics start to show artifacts such as white dots ("snow"), you should back down 25 MHz and leave it at that. When you overclock too hard, the card will show artifacts, render empty polygons, or even freeze. Carefully find that limit and then back down at least 25 MHz from the moment you notice an artifact. Look carefully and observe well. I really wouldn't know why you'd need to overclock today's tested card anyway, but we'll still show it. All in all, you always overclock at your own risk.
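The trial-and-error loop described above, expressed as a sketch; apply_core_clock and run_stress_test are hypothetical stand-ins for whatever tweak utility and stress tool you use, not a real tuning API:

```python
def apply_core_clock(mhz: int) -> None:
    """Hypothetical stand-in: set the core clock via your vendor's tweak utility."""
    print(f"applying {mhz} MHz")

def run_stress_test() -> bool:
    """Hypothetical stand-in: run a GPU stress loop; return False on artifacts or a crash."""
    return True

def find_stable_clock(base_mhz: int, ceiling_mhz: int, step_mhz: int = 25) -> int:
    """Walk the core clock up in small steps; back off one step at the first artifact."""
    clock = base_mhz
    while clock + step_mhz <= ceiling_mhz:
        clock += step_mhz
        apply_core_clock(clock)
        if not run_stress_test():  # artifacts ("snow"), empty polygons, or a freeze
            clock -= step_mhz      # back down 25 MHz from the first failure
            apply_core_clock(clock)
            break
    return clock
```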
Normally, most cards will tweak to roughly the same levels due to all kinds of hardware protection. We applied the following settings:

- Core Voltage: Max
- Power Limiter: +15% (Max)
- Clock Max: 2900~2925 MHz
- Mem clock max: 2312 MHz (= 18.5 Gbps effective)
- Fan RPM: default
The memory clock is set at 2312 MHz, where it maxes out, for an 18.5 Gbps effective data rate. That's tweakable; you'll quite easily hit 18.5 Gbps. Once tweaked, the power limiter kicks in quickly, and when it does, it mostly clocks the card back down to ~2900 MHz when it gets GPU-bound in a game.
The results show the respective default-clocked results plotted in percentages. To the far right, where you can see "Aver Difference %", is the result of the four games tested and averaged out.
The GPU Shoot Out
Our reader base has requested that we compare specific models of all brands in a chart. Normally we only compare a product under review against the reference card. In this new chart, you can compare all tested cards against each other. The chart is new, and we use 3DMark Time Spy. The DX12 software suite has not been around that long, hence all cards prior to the release date of this software are missing. We display the GPU score and not the combined score; that way you will have a more objective view of what the GPU can actually do. Keep in mind, though, that driver changes over time can always influence the score a tiny bit here and there. Over time this shoot-out chart will build up (and should become a rather lengthy one).
Final words
That STRIX is a seductive little scoundrel, and it has the looks and silence to match. Overall, the Radeon RX 6750 XT is a memorable card for gaming at WQHD (2560×1440) resolution, although it is somewhat pricey. In the previous two years, mainstream series have experienced extraordinary price increases. Out of the box, this card performs better than a reference video card; however, since AMD is not distributing those for review, that remains a bit of an educated guess. To support the powerful cooling and three-fan architecture, the video card is physically larger in every dimension. The backplate is adequately vented and provides cooling via thermal pads. The standard RX 6750 XT performs similarly to the RTX 3070 Ti and sometimes the 3080, but solely in terms of shading performance; raytracing performance is significantly slower than that of the rivals. Also, this is mainly in the lower resolutions, thanks to the caching techniques applied. While the Infinity Cache works well in most cases, it was designed as a workaround for a limitation of the memory type chosen (GDDR6 versus GDDR6X), and the current AMD GPUs are memory bandwidth constrained, even when using GDDR6 at 18 Gbps, but especially when using a 192-bit wide memory bus. We question AMD's attempt to justify a price of 549 USD, though.
Performance spread reference
We've been fairly busy over the last week or so, with eight Radeon RX 6x50 XT reviews lined up; technically more, as we're still waiting on other brand samples as well. The variances are not enormous from top to bottom; have a look:
So the chart above is arbitrary in the sense that results can differ by a single percent here and there, less so in fillrate-limited situations, more so in GPU-bound situations. But from the reference card to the fastest AIB cards, you're looking at 3 to 4% differentials (depending on game and resolution).
Cooling & acoustics
ASUS performs great here. You've seen the FLIR photographs; there is barely any heat bleed. The performance BIOS mode keeps this card at 56 degrees Celsius, while the silent BIOS mode sits at give or take 60 degrees Celsius. Both modes provide the same performance, and temperatures will differ slightly based on chassis and airflow. Regarding acoustics, there is only one mode to consider, the quiet BIOS mode, since it is, in fact, silent. Honestly, the performance option makes little sense if there is no extra performance to gain.
Energy
Heat output and energy consumption are closely related to each other, as for (graphics) processors power and heat can be perceived as a 1:1 relationship; 250 Watts of energy consumption approaches 250 Watts of heat output. This is the basis of TDP. AMD lists the card at 250W, which is okay at best for a graphics card in the year 2022. We measure numbers slightly above the advertised values; the entire power consumption of the card closes in at ~275 Watts. That's total board power, not TGP (fans and RGB can easily utilize 10~15 Watts). The card can peak at 300W (spikes).
Coil whine
Compared to the reference Radeon RX 6700 XT, the STRIX card exhibits far less coil squeal. It's at a level where you can hardly hear it. In a closed chassis, that noise would fade into the background. However, with an open chassis, you can hear coil whine/squeal. All graphics cards exhibit this in some form, especially at higher framerates.
Pricing
AMD has done a great job with Navi 22. However, the price of an entry-level to mainstream graphics card has risen to $549, and the model listed and tested today, under SKU code 90YV0HK1-M0NA00, will cost $649. Too pricey? Yes. The card sits close to RTX 3070 series performance, but only in shading performance; raw raytracing performance lags behind the competition.
Tweaking
The RX 6750 XT enjoys having more memory bandwidth available to it. You can add it manually and get up to 18.5 Gbps; however, results will vary per board, brand, and even card due to cooling (GDDR6/GPU/VRM) and other factors. We could get this AMD Navi GPU to run at a very respectable 2950 MHz with a little GPU tweaking, and that's without any anomalies or crashes of any kind. The dynamic clock frequency then hovers in the ~2900 MHz range, depending on the load, game/app, and board-assigned power. Even so, that's quite a feat. As is always the case, all of your tweaking and increased energy consumption will only provide you with a maximum of ~5% improvement in performance (depending on your results and model of graphics card). A tweak must successfully complete runs in four different games at 2560×1440 resolution in order to be considered stable enough to be listed here.
Conclusion
The STRIX is an exquisite, possibly even overengineered graphics card for the Radeon RX 6750 XT product family. It provides additional features (dual BIOS, RGB/fan connectors, etc.), better cooling, and a factory tweak (for the OC variant), but that pricing, huh? The GPU stays cool and silent while gaming thanks to intelligent fan technology that regulates the fans precisely. You get a WQHD product with excellent performance at temperatures and noise levels that are perfectly acceptable. Aesthetically, the card is beautiful, albeit perhaps a bit large. As said, this is a beautiful Full HD and Quad HD card with limited Ultra HD capabilities. The raytracing performance of this generation's RDNA 2 GPUs is merely ordinary, and AMD's answer to DLSS is still lacking. All of this might be overlooked if the product were reasonably priced, but since the card will likely cost well over 600 USD/EUR, we cannot rationalize such a price point. This is not ASUS's fault; AMD has priced this series excessively. Nonetheless, the design is exquisite, with all the necessary boxes checked. The STRIX cards are always well-executed, and ASUS has done an excellent job of addressing the demands of the majority of customers, delivering a well-rounded solution as a consequence.