There are two important levels of graphics performance to consider in a modern system: whether the graphics are sufficient for seamless everyday use, and whether they meet a substantial standard for gaming. On one side we have integrated graphics, which take advantage of a unified processor to simplify the system; on the other, a range of options such as smartphones, consoles, and discrete graphics cards. Somewhere in there we have a middle ground – can an integrated option have enough thermal headroom and graphics power to be worthwhile for gaming? This is the pitch of AMD’s Ryzen 4000 based APUs, which combine Zen 2 CPU cores with fast Vega 8 graphics. With a 65 W headroom, they should surpass anything that mobile processors have to offer, but is that enough to replace the low-end discrete graphics market?

When a CPU meets GPU

AMD is the company of the accelerated processing unit, or APU. The company introduced the term in 2011 when it started combining its x86 CPU cores and some form of graphics accelerator into the same piece of silicon. This combined processor, built for the laptop and desktop market, was designed to remove the need for a completely separate graphics card in a system, simplifying the design and bringing down overall cost for anyone who simply needed a graphics output for basic tasks. At the time, these solutions were very much for the low-end market.

Combining a CPU and a GPU on the same piece of silicon has a variety of tradeoffs involved. The key benefit is reducing that bill of materials, but there are also advantages in the latency of communication between the CPU cores and the GPU acceleration as the data does not need to go off the chip. There can also be benefits in power control, with a system being able to manipulate how much power goes to each in a simpler way.

But there are a number of downsides. The total power consumption of the system is now condensed into one package, rather than split across two, making the single APU the central hotspot for cooling. Adding in graphics also makes the CPU die larger, and harder to yield than two separate pieces of silicon. It can also be complex if both the CPU and GPU have to be made on the same manufacturing process, depending on the initial design of those architectures. Then there is the memory problem – graphics loves memory bandwidth, and CPU memory controllers are slow by comparison: while a GPU might enjoy 300 GB/s from GDDR memory, a CPU with two channels of DDR4-3200 only has 51.2 GB/s. That memory also needs to be shared between the CPU and GPU, making it all the more complex.
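The 51.2 GB/s figure comes from simple arithmetic on transfer rate and channel width. A quick sketch of that calculation (theoretical peak only – sustained bandwidth is lower in practice due to refresh cycles, bus turnarounds, and controller overhead):

```python
# Theoretical peak DDR bandwidth: transfer rate x bytes per transfer x channels.
# This is a sketch of the arithmetic only; real-world sustained bandwidth is lower.

def ddr_peak_bandwidth_gbps(mt_per_s: int, bus_width_bits: int = 64, channels: int = 2) -> float:
    """Peak bandwidth in GB/s for a DDR configuration (64-bit channels by default)."""
    bytes_per_transfer = bus_width_bits // 8          # a 64-bit channel moves 8 bytes per transfer
    return mt_per_s * bytes_per_transfer * channels / 1000  # MT/s * bytes -> MB/s -> GB/s

print(ddr_peak_bandwidth_gbps(3200))  # dual-channel DDR4-3200 -> 51.2 GB/s
```

The same function shows why GDDR pulls so far ahead: wider effective buses and much higher transfer rates multiply together.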

For ultra-mobile laptops, the tradeoff of a single combined APU is worth it, as it means there can be a bigger battery, and reducing the number of items inside the shell helps with aesthetics and thermals. Ultra-mobile laptop users also rarely demand top-end graphics performance for 4K gaming, so something that provides ‘enough’ performance, at a suitably low power, is often preferred.

The higher the performance of a combined CPU+GPU piece of silicon, the more it arguably cannibalizes the market for discrete graphics by taking away options at the low end. If a simple APU can perform the graphics duties of a $100 graphics card, then there is arguably no need for $100 graphics cards any more. Comparing what each GPU vendor has launched in the last few years for the ‘entry gaming market’ confirms that the market below $100 is now left to APUs and simple ‘must-have-a-screen’ cards for the pre-built market.

Perhaps surprisingly, over the last couple of years – despite AMD at one point promoting its RX 480 as a possible $200 gaming card – both companies have veered heavily towards the high-end gaming market, leaving the budget range to OEMs, and arguably the mid-range as well. The latest releases from both AMD and NVIDIA start at a relatively hefty $399 MSRP, a world away from the $200 suggested low-end price of the AMD RX 480 at launch. Part of this is driven by new gaming features like ray tracing; part by the fact that leading-edge graphics tend to launch at the high end first, as that is where the biggest return on investment is; and part by the rise of high-resolution gaming, where 8 GB of video memory seems to be the new minimum, if not more, which drives up the total cost.

So if APUs are there to bridge the gap, then we’re at a bit of a quandary. Intel has a leading-edge integrated graphics solution in its latest Xe-LP Tiger Lake processors, but these are for mobile use only. In that market segment, a good-performing chip has a better financial return than the same silicon used in a socketed desktop processor, and with Intel looking to drive mobile volume, it is directing all of that silicon to mobile right now.

This means that the only company taking socketed desktop graphics seriously right now is AMD, who is starting to use its mobile-first Renoir silicon for desktop processors. This involves moving the TDP from 15W/45W up to 65W, and putting it in an AM4 socket package, similar to what AMD has done with its previous APU silicon. But now we get onto a specific issue with AMD’s Ryzen 4000 desktop APUs.

Ryzen 4000 Desktop APUs: Not for General Sale

That’s correct – the Ryzen 4000 desktop APUs from AMD are not available at retail. While AMD announced twelve different model numbers for the latest generation, varying in core count, graphics count, and power, the company has decided not to create special retail packaging and offer them for general consumption.

What AMD has done here is enable these products for two specific markets. Companies like HP, Dell and Lenovo can order these processors from AMD and put them into pre-built systems for consumers like you and me, or they can order the Ryzen PRO versions and build commercial systems with extra manageability features for corporate deployments.

By enabling these processors only in pre-built and commercial systems, this allows AMD to have a tighter control on its stock of processors. These companies purchase processors on the scale of tens of thousands, so if a big OEM like HP wants to create a series of pre-built computers, they can put the order in with AMD and AMD will give HP a delivery date. If a product is sold on the open market, then AMD has to work with distribution channels dealing with a scale of tens of units, rather than thousands, making the operation more complex with stock potentially either sitting idle, or not being available if they cannot manufacture enough.

By keeping this hardware as OEM only, AMD can adjust its silicon between desktop and mobile as required with much tighter controls. This is important for a company if the same product in one market (e.g. this silicon in mobile) is worth more than the other, as it focuses the silicon in the mobile market while also meeting contractual demands on the desktop side. Reports of AMD needing more 7nm wafers from TSMC could also play into this, as AMD would rather use those wafers for higher margin products.

So given all this, why test these processors at all? Well the truth is end-users can actually buy them. But it is not as easy as putting an order in at Amazon.

AMD calls its retail products PIBs, or ‘processor in box’. These parts have a consumer warranty attached, fancy packaging to draw you in, and usually a cooler, depending on the product. The other type, which it sells to HP and Dell, is more for business-to-business (B2B) sales, and these processors are called ‘tray’ or ‘OEM’ products. Here AMD just sells the CPU with a basic B2B warranty – no packaging, no cooler. If you are an OEM like Dell, you don’t want to be opening 10,000 packages to build 10,000 systems, so these processors just come in a tray and that is that.

Retailers that sell CPUs to general consumers will almost certainly carry PIBs. But some retailers, especially those that also make their own pre-built systems, will sell the tray versions as well.

These are sold as CPU only, in a protective case, without a cooler, and often only a limited warranty solely with the retailer (usually 1 year). Stock of these OEM processors is often very transient day-to-day, and some of the bigger retailers will often include third-party sales of these processors as well. It should be noted that direct-to-consumer sales of OEM-style processors tends to be more prevalent in Eastern Europe and Russia than in North America, from personal experience.

Ultimately this is how we sourced these APUs for this review.

How We Acquired the 65 W Ryzen 4000 (Pro) APUs

AMD was not sampling Ryzen 4000 APUs for review, and so we had to scour the internet for a system builder that was also selling the individual hardware. The other alternative was to buy three distinct pre-built systems, but we found a UK retailer that was prepared to sell the processors on their own direct to consumers. Actually we had to fudge it a little bit. Time for a story.

I found a retailer that listed all three processors as ‘awaiting stock’, and all three had dates about a week apart from each other. I could not pre-order them, but I could add them to my basket; I had to wait for stock to arrive before putting in an order. When the first one came into stock on the website, I put in the order for the Ryzen 5 Pro 4650G, and it arrived the next day. As soon as I made the order, I put the next one in my basket. One down, two to go, with the other two expected to arrive over the next two weeks. I kept checking the website daily to ensure that the ETA was consistent – I even emailed the company to confirm the dates. When the second processor was expected to go into stock, I loaded up my basket to see the Ryzen 3 Pro 4350G was no longer there.

I moved on over to the product page, where it was listed as in stock, but the add-to-basket button had been disabled. I was somewhat confused as to what was going on – perhaps AMD had asked them to stop selling the hardware direct to consumers, and to only use it for pre-built systems? I have no idea as to the real reason, but what comes next was an interesting element of trickery.

I went through the website source code to see how items were added to the basket, and noticed that each ‘add-to-basket’ button had an ID related to the stock item. I found the stock item for the Ryzen 3, and adjusted the add-to-basket button of the Threadripper 3990X to point to the Ryzen 3. After a few tries where it didn’t seem to work, it finally did! I had a Ryzen 3 Pro 4350G in my basket. I put in the order, no issues there, and off it went. It arrived next day, and the stock count listed on the website went down by one. The add-to-basket button was still disabled, and I wondered if the retailer had just suspected that I had one in my basket all along and just went along with it.

So a week later the Ryzen 7 Pro 4750G was expected to be in stock. Again, I was checking it daily to see the ETA slowly count down. The day when the stock was supposed to arrive, the whole product page had vanished. All the product pages for the Ryzen 4000 APUs had vanished. What in the world was going on?

I decided to put my previous plan into action a second time – could I modify the add-to-basket product ID to point to the Ryzen 7 Pro 4750G to get it in the basket? Then here was a second problem – I didn’t know the ID for the processor. The basket ID for each product was different to the URL ID, so I had to do some guesswork based on the previous two IDs I had used for the Ryzen 5 and Ryzen 3. It wasn’t as straightforward as the products being sequential, and as mentioned before, getting the button to work properly was a bit hit-and-miss.

It took about 10 minutes, and I added a wide variety of processors to my basket, but I did finally get the 4750G in there. It was listed as in stock, for next day delivery. I clicked purchase, handed over my details, and it arrived the next day. There was no questioning from the retailer as to how I put in an order. Clearly a sale is a sale, right?

Now I’m not expecting users to go out and have to work out how their retailer’s website works in order to buy these APUs. The hardware has been out long enough now that there are a number of third-party sellers on leading etailers offering these APUs at a variety of prices. These sellers seem to be focused in the Hong Kong region, which means warranty might be an issue, and shipping import taxes might be a part of bringing it into your country. Some of the sellers have dodgy ratings too. But they are out there, in larger numbers than before.

The AMD Desktop Ryzen 4000 Offerings

As mentioned, AMD launched twelve desktop Ryzen 4000 processors in the family. These were split into six for Ryzen PRO and six not-for-Pro, and in each of those six, three were for 65W and three were for 35W. In each set of three was a Ryzen 7, a Ryzen 5, and a Ryzen 3. AMD is covering all the bases with these parts.

The top of the line is the Ryzen 7 4700G, with eight Zen 2 cores, sixteen threads, and Vega 8 graphics. This processor has a base frequency of 3.6 GHz, a turbo frequency of 4.4 GHz, and a peak graphics frequency of 2100 MHz. This is a substantial graphics frequency jump over the previous generation halo desktop APU, which ran Vega 11 graphics only at 1450 MHz. AMD puts this down both to the advantages of 7nm and to physical design optimizations of the Vega graphics, providing a better gen-on-gen improvement than expected, and enabling a smaller graphics block that is better fed by the Zen 2 cores.

At the lower end is the Ryzen 3 4300G, with four cores and eight threads, with a base of 3.8 GHz and a turbo of 4.0 GHz, which should mean that performance is very consistent. This part has six compute units for graphics, running at 1700 MHz.

Every 4000G processor at 65 W has a GE counterpart at 35 W, which for the most part reduces the base frequency and TDP only. The exception is the Ryzen 7, where 100 MHz is lost on turbo and 100 MHz is lost on graphics. All the Ryzen non-Pro hardware has a Pro version equivalent.

All of the processors support DDR4-3200 memory, and have 16x PCIe 3.0 lanes for graphics, 4x PCIe 3.0 lanes for storage, and 4x PCIe 3.0 lanes to connect to the chipset. These are PCIe 3.0 connections primarily on the basis of power – this is the same silicon that goes into 15 W mobile processors, and the power draw of PCIe 4.0 would have been too high, so AMD only enabled these processors with a PCIe 3.0 controller.

For this review, we sourced all three of the Ryzen Pro 65 W processors.

Desktop Discrete Graphics vs Integrated Graphics

Due to the difficulty in obtaining these processors, I would assume that anyone obtaining them will be using the integrated graphics in order to get the most out of their purchase. These processors still have 16x PCIe 3.0 lanes for graphics, which means we could stick in a discrete GPU if we wanted. As part of this review, we will test both, if only to see where a Renoir APU would fit if it had access to a full-blown directly connected discrete graphics card.

It is worth noting that AMD made a big fuss recently with its Zen 3 Ryzen 5000 CPUs, promoting the 32 MB of L3 cache available to each core as a big improvement for discrete graphics gaming. That is double the 16 MB of L3 cache each core can access on the Zen 2-based Ryzen 3000 CPUs. These Renoir APUs are hamstrung further on the same metric: each Zen 2 CPU core here only has access to 4 MB of L3 cache. On the other hand, the Renoir APUs are monolithic, whereas those desktop CPUs rely on a chiplet design, which adds latency. This was an AMD design choice, so it will be interesting to see how it works out for performance.

Test Setup and #CPUOverload Benchmarks

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer’s maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, or faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds – this includes home users as well as industry, who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.


The 2020 #CPUOverload Suite

Our CPU tests go through a number of main areas. We cover Web tests using our un-updateable version of Chromium, opening tricky PDFs, emulation, brain simulation, AI, 2D image to 3D model conversion, rendering (ray tracing, modeling), encoding (compression, AES, video and HEVC), office based tests, and our legacy tests (throwbacks from another generation of code but interesting to compare).

The Win10 Pro operating system is prepared in advance, and we run a number of registry edit commands to ensure that various system features are turned off and disabled at the start of the benchmark suite. This includes disabling Cortana, disabling the GameDVR functionality, disabling Windows Error Reporting, disabling Windows Defender as much as possible, disabling updates, re-implementing power options, and removing OneDrive, in case it sprouted wings again.

A number of these tests have been requested by our readers, and we’ve split our tests into a few more categories than normal as our readers have been requesting specific focal tests for their workloads. A recent run on a Core i5-10600K, just for the CPU tests alone, took around 20 hours to complete.


Power

  • Peak Power (y-Cruncher using latest AVX)
  • Per-Core Loading Power using POV-Ray


Office

  • Agisoft Photoscan 1.3: 2D to 3D Conversion
  • Application Loading Time: GIMP 2.10.18 from a fresh install
  • Compile Testing (WIP)


Science

  • 3D Particle Movement v2.1 (Non-AVX + AVX2/AVX512)
  • y-Cruncher 0.78.9506 (Optimized Binary Splitting Compute for mathematical constants)
  • NAMD 2.13: Nanoscale Molecular Dynamics on ApoA1 protein
  • AI Benchmark 0.1.2 using TensorFlow (unoptimized for Windows)


Simulation

  • DigiCortex 1.35: Brain simulation
  • Dwarf Fortress 0.44.12: Fantasy world creation and time passage
  • Dolphin 5.0: CPU render test of the Wii emulator


Rendering

  • Blender 2.83 LTS: Popular rendering program, using PartyTug frame render
  • Corona 1.3: Ray Tracing Benchmark
  • Crysis CPU-Only: Can it run Crysis? What, on just the CPU at 1080p? Sure
  • POV-Ray 3.7.1: Another Ray Tracing Test
  • V-Ray: Another popular renderer
  • CineBench R20
  • CineBench R23


Encoding

  • Handbrake 1.32: Popular Transcoding tool
  • 7-Zip: Open source compression software
  • AES Encoding: Instruction accelerated encoding
  • WinRAR 5.90: Popular compression tool


Legacy

  • CineBench R10
  • CineBench R11.5
  • CineBench R15
  • 3DPM v1: Naïve version of 3DPM v2.1 with no acceleration
  • X264 HD3.0: Vintage transcoding benchmark


Web

  • Kraken 1.1: Deprecated web test with no successor
  • Octane 2.0: More comprehensive test (but also deprecated with no successor)
  • Speedometer 2: List-based web-test with different frameworks


Synthetic

  • GeekBench 4 and GeekBench 5
  • AIDA Memory Bandwidth
  • Linux OpenSSL Speed (rsa2048 sign/verify, sha256, md5)
  • LinX 0.9.5 LINPACK

SPEC (Estimated)

  • SPEC2006 rate-1T
  • SPEC2017 rate-1T
  • SPEC2017 rate-nT

It should be noted that due to the terms of the SPEC license, because our benchmark results are not vetted directly by the SPEC consortium, we have to label them as ‘estimated’. The benchmark is still run and we get results out, but those results have to have the ‘estimated’ label.


Microbenchmarks

  • A full x86 instruction throughput/latency analysis
  • Core-to-Core Latency
  • Cache-to-DRAM Latency
  • Frequency Ramping
  • A y-cruncher ‘sprint’ to see how 0.78.9506 scales with increasing digit compute

Some of these tests also have AIDA power wrappers around them in order to provide insight into how power is reported through the test.

2020 CPU Gaming (GPU) Benchmarks

In the past, we’ve tackled the GPU benchmark set in several different ways. We’ve had one GPU run multiple games at one resolution, or multiple GPUs take a few games at one resolution, and then, as the automation progressed into something better, multiple GPUs taking a few games at several resolutions. However, based on feedback, running the best GPU we can get hold of over a dozen games at several resolutions seems to be the best bet.

Normally securing GPUs for this testing is difficult, as we need several identical models for concurrent testing, and very rarely is a GPU manufacturer, or one of its OEM partners, happy to hand me 3-4 of the latest and greatest. In that aspect, over the years, I have to thank ECS for sending us four GTX 580s in 2012, MSI for sending us three GTX 770 Lightnings in 2014, Sapphire for sending us multiple RX 480s and R9 Fury X cards in 2016, and in our last test suite, MSI for sending us three GTX 1080 Gaming cards in 2018.

For our testing on the 2020 suite, we have secured three RTX 2080 Ti GPUs direct from NVIDIA. These GPUs are well optimized for in drivers and in gaming titles, and given how rare our updates are, we are thankful for getting the high-end hardware. (It’s worth noting we won’t be updating to whatever RTX 3080 variant is coming out at some point for a while yet.)

On the topic of resolutions, this is something that has been hit and miss for us in the past. Some users state that they want to see the lowest resolution and lowest fidelity options, because this puts the most strain on the CPU, such as a 480p Ultra Low setting. In the past we have found this unrealistic for all use cases, and even if it does give the best shot at a difference in results, the actual point where you become GPU limited might be at a higher resolution. In our last test suite, we went from 720p Ultra Low up to 1080p Medium, 1440p High, and 4K Ultra settings. However, our most vocal readers hated it, because even by 1080p Medium, we were GPU limited for the most part.

So to that end, the benchmarks this time round attempt to follow the basic pattern where possible:

  1. Lowest Resolution with lowest scaling, Lowest Settings
  2. 2560×1440 with the lowest settings (1080p where not possible)
  3. 3840×2160 with the lowest settings
  4. 1920×1080 at the maximum settings

Point (1) should give the ultimate CPU limited scenario. We should see that lift as we move up through (2) 1440p and (3) 4K, with 4K low still being quite strenuous in some titles.

Point (4) is essentially our ‘real world’ test. The RTX 2080 Ti is overkill for 1080p Maximum, and we’ll see that most modern CPUs pull well over 60 FPS average in this scenario.

What will be interesting is that for some titles, 4K Low is more compute heavy than 1080p Maximum, and for other titles that relationship is reversed.

For integrated graphics testing, we use the (1) and (4) settings to see where the GPU lies with respect to CPU performance (1) as well as test to confirm just how close integrated graphics is to proper 1080p gaming (4).

So we have the following benchmarks as part of our script, automated to the point of a one-button run and out pops the results approximately 10 hours later, per GPU. Also listed are the resolutions and settings used.

For each of the games in our testing, we take the frame times where we can (the two where we cannot are Chernobylite and FFXIV). For each resolution/setting combination, we run each game for as many loops as fit in a given time limit (often 10 minutes per resolution). Results are then reported as average frame rates and 95th percentiles.
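As a rough illustration of the post-processing involved, here is how an average frame rate and a 95th-percentile figure can be derived from a list of per-frame times. This is a generic sketch of the idea, not the exact maths used by any particular benchmark tool.

```python
# Convert a list of per-frame render times (milliseconds) into an average FPS
# and a 95th-percentile FPS. The percentile is taken over frame times, so it
# reflects the slow frames that cause visible stutter.

def summarize(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    ordered = sorted(frame_times_ms)                      # slowest frames at the end
    idx = min(len(ordered) - 1, int(0.95 * len(ordered))) # index of the 95th-percentile frame
    p95_fps = 1000 / ordered[idx]                         # report that frame time as an FPS figure
    return avg_fps, p95_fps

avg, p95 = summarize([10.0, 10.0, 10.0, 10.0, 40.0])      # one bad frame in five
print(avg, p95)                                           # 62.5 25.0
```

Note how a single slow frame barely moves the average but dominates the percentile figure, which is exactly why both numbers are worth reporting.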

If there are any game developers out there involved with any of the benchmarks above, please get in touch – I have a list of requests to make benchmarking your title easier! I have a literal document I’ve compiled showing what would be ideal, best practices, who gets it right and who gets it wrong, etc.

The other angle is DRM, and some titles have limits of 5 systems per day. This may limit our testing in some cases; in other cases it is solvable.

Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically the peak power consumption of a processor, as purchased, is given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that value of TDP still signifies the peak power consumption. For the processors we test at AnandTech, either desktop, notebook, or enterprise, this is not always the case.

Modern high performance processors implement a feature called Turbo. This allows, usually for a limited time, a processor to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values 2.5x above the rated TDP.
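A toy model helps make the PL1/PL2 interaction concrete. The sketch below is loosely modeled on Intel's PL1/PL2/tau scheme; the specific wattages, the time constant, and the use of an exponentially weighted average are illustrative assumptions, as real turbo controllers are vendor-specific and more involved.

```python
# Simplified turbo power budget: the chip may draw up to PL2 (turbo power limit)
# as long as a moving average of recent power draw stays below PL1 (the TDP).
# Values here are purely illustrative, not any specific processor's limits.

PL1, PL2, TAU = 65.0, 162.0, 28.0   # watts, watts, seconds

def allowed_power(avg_power: float) -> float:
    """While the running average is under PL1, short bursts up to PL2 are allowed."""
    return PL2 if avg_power < PL1 else PL1

def step_average(avg_power: float, draw: float, dt: float = 1.0) -> float:
    """Exponentially weighted moving average of power over a TAU-second window."""
    alpha = dt / TAU
    return (1 - alpha) * avg_power + alpha * draw

# A sustained heavy load starts from idle: turbo is allowed at first, then the
# average climbs towards the draw and the budget collapses back to PL1.
avg = 0.0
for second in range(60):
    limit = allowed_power(avg)
    avg = step_average(avg, limit)   # assume the chip uses everything it is allowed
```

Under this model a burst of a few seconds never trips the limit, which is why short benchmarks can report power figures far above TDP while long renders settle back down.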

AMD and Intel have different definitions for TDP, but broadly speaking they are applied in the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we’ve got a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values which are tied together – when all cores are running at base frequency, the processor should be running at or below the TDP rating. All turbo modes and power modes above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020, by refusing to define a TDP rating for its new processors, instead going for a range. Obfuscation like this is a frustrating endeavor for press and end-users alike.

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include a full AVX2 workload (these Zen 2 cores do not support AVX512), real-world image-model construction, and others as appropriate. These tests are done as comparative models. We also note the peak power recorded in any of our tests.

First up is our image-model construction workload, using our Agisoft Photoscan benchmark. This test has a number of different areas that involve single thread, multi-thread, or memory limited algorithms.

Each of our three processors here seems to approach different steady state power levels for the different areas of the benchmark.

  • The 8-core is around 65 W in the first stage, and more around 48 W in the second stage.
  • The 6-core is around 51 W in the first stage, and more around 38 W in the second stage.
  • The 4-core is around 36 W in the first stage, and more around 30 W in the second stage.

The fact that the difference between each of the processors is 14-15 W in the first stage goes a little way to suggesting that we’re consuming ~7 W per core in this part of the test, which is strictly multi-threaded. However, when the benchmark moves into more variably threaded loading, all three CPUs sit well below their TDP levels.
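That back-of-the-envelope estimate can be written out explicitly. The wattages below are the approximate first-stage figures from our measurements; the calculation deliberately ignores the shared uncore and graphics power, so treat the result as a rough upper bound on per-core draw.

```python
# Per-core power estimate from package power deltas: consecutive parts differ
# by two cores, so the power delta divided by the core delta gives a rough
# per-core figure (shared uncore/graphics power cancels out of the subtraction).

stage1_watts = {8: 65.0, 6: 51.0, 4: 36.0}   # cores -> observed package power (W)

per_core = [(stage1_watts[a] - stage1_watts[b]) / (a - b)
            for a, b in [(8, 6), (6, 4)]]
print(per_core)   # roughly 7 W and 7.5 W per active core
```

Both deltas landing around 7 W per core is a good sign the first stage really is scaling across all cores evenly.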

The second test is from y-Cruncher, which is our AVX2/AVX512 workload. This also has some memory requirements, which can lead to periodic cycling with systems that have lower memory bandwidth per core options.

The y-Cruncher test is a little different, as we’re mostly concerned about peaks. All three CPUs have a TDP rating of 65 W, however the 8-core here breaches 80 W, the 6-core is around 72 W, and the only processor below that TDP value is the quad core Ryzen 3.

For absolute peak power across all of our tests:

(0-0) Peak Power

For absolute instantaneous peak power, each of the Ryzen 4000 APUs does what was expected – with the Ryzen 7 hitting the socket limit for 65 W processors.
