
Intel Core i3-13100 vs. AMD Ryzen 5 5600: Which CPU is Better & What Are the Differences?

We recently acquired the Intel Core i3-13100, Intel's latest budget processor, for testing. This 13th-gen model is essentially a rebadged 12th-gen Core i3-12100 with a slight bump to clock speeds, and it retains the same 1.25 MB of L2 cache per core rather than the 2 MB found in proper Raptor Lake CPUs. In truth, calling the Core i3-13100 a Raptor Lake processor isn't entirely accurate, and its 13th-gen status may also be a little misleading. We'll be reviewing the Core i3-13100 as well as the Core i5-13400 and 13500 soon to see what these budget CPUs have to offer.

The Core i3-13100 and 12100 are very similar chips, the main difference being a boost in base and turbo clocks for the 13100 of 100 MHz and 200 MHz, respectively. Both chips have 5 MB of L2 cache, 12 MB of L3 cache, a max turbo power of 89 watts, support for DDR5-4800 or DDR4-3200 memory, 16 lanes of PCIe 5.0 and 4 lanes of PCIe 4.0. The 13100 is also more expensive, costing $150 versus the 12100's $133. Despite the small differences, we decided to test the 13100 due to the changes in pricing for lower-end parts and our previous recommendation of the 12100.
Instead of following the traditional review format, we decided to conduct a GPU scaling benchmark for a direct comparison among the Ryzen 5 5600, Ryzen 5 5500, and the original Core i3-12100. When the Core i3-12100 was first released, there was little competition: AMD offered the Ryzen 3 3100 at $175, which was considered overpriced, and the Ryzen 5 5600G was even worse value at $260. Zen 3 was not as well-regarded then as it is today, with parts like the 5800X3D since dropping as low as $330 and the 5600 now available for $150.

A year ago, neither the Ryzen 5 5600 nor the Ryzen 5 5500 CPUs existed. When they were introduced in April, the 5600 was priced at $200 and the 5500 was priced at $160. Since then, the 5500's price has dropped to $100 and the 5600's price has dropped to $150, which is the same as the new Core i3-13100.

Currently, the Core i3-12100 is priced at $133, the 13100 is priced at $150, and they are competing with the Ryzen 5 5500 at $100 and the 5600 at $150. This is a more competitive market than it was a year ago.

Since this is a GPU scaling benchmark, we are focusing on gaming performance. We have tested a dozen titles at 1080p using the Radeon RX 6650 XT, 6950 XT and GeForce RTX 4090. We chose 1080p to minimize the GPU bottleneck and emphasize CPU performance, while including GPUs of three performance tiers lets us see how much a GPU bottleneck masks the differences between CPUs.
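For reference, here is how the two metrics we quote throughout, the average frame rate and the 1% lows, can be derived from a frame-time capture. This is a minimal sketch assuming the common definition of 1% lows (the average frame rate of the slowest 1% of frames); the function name and sample numbers are our own, not from any particular capture tool.

```python
def fps_metrics(frame_times_ms):
    """Return (average fps, 1% low fps) from a list of per-frame times in ms."""
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    # The slowest 1% of frames are the ones with the longest frame times.
    worst = sorted(frame_times_ms, reverse=True)
    slowest_1pct = worst[: max(1, len(worst) // 100)]
    low_fps = 1000 * len(slowest_1pct) / sum(slowest_1pct)
    return avg_fps, low_fps

# Illustrative capture: 99 frames at 10 ms (100 fps) plus one 50 ms stutter.
avg, low = fps_metrics([10.0] * 99 + [50.0])
print(round(avg, 1), round(low, 1))  # ~96.2 average fps, 20.0 fps 1% low
```

This is why a single hitch barely moves the average but craters the 1% lows, and why we report both numbers.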

For the test systems, we are using DDR4 memory exclusively as it is a more affordable option for sub-$200 processors. Though there are situations where DDR5 would make sense, for the sake of simplicity and consistency, we have chosen to use DDR4 for our comparisons.

We've selected the G.Skill Ripjaws V 32GB DDR4-3600 CL16 memory kit, as it offers a good balance of price and performance at $115. The AM4 processors can run this kit at its rated 3600 spec using the standard CL16-19-19-36 timings. The locked Intel processors, however, can't run above DDR4-3466 while remaining in Gear 1 mode. Ideally you want to stay in Gear 1, as memory has to run well above DDR4-4000 before Gear 2 can match Gear 1 performance.

This limitation exists because the System Agent (SA) voltage is locked on non-K SKUs, which limits the memory controller's ability to support high-frequency memory in Gear 1 mode. We therefore tested the 12100 and 13100 with the Ripjaws kit adjusted to DDR4-3466, the optimal configuration for these processors short of manually tuning the sub-timings.
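To see why Gear 1 matters, here's a rough sketch of the gear-ratio arithmetic, under the common understanding that the memory controller runs 1:1 with the memory clock in Gear 1 and 1:2 in Gear 2. The function and figures are illustrative, not Intel's specification:

```python
def controller_mhz(ddr_rating, gear):
    """Memory controller clock for a DDR4-<ddr_rating> kit in a given gear mode."""
    mem_clock = ddr_rating / 2   # DDR transfers data twice per memory clock
    return mem_clock / gear      # Gear 1 = 1:1 ratio, Gear 2 = 1:2 ratio

print(controller_mhz(3466, 1))  # 1733.0 MHz: near the locked-SKU Gear 1 ceiling
print(controller_mhz(4400, 2))  # 1100.0 MHz: still well below Gear 1 at 3466
```

Even DDR4-4400 in Gear 2 leaves the controller clocked far lower than DDR4-3466 in Gear 1, which is why the extra bandwidth rarely offsets the added latency at attainable DDR4 speeds.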

Benchmarks

First, we have Watch Dogs: Legion, and there are some noteworthy observations from this test. One of the most striking is that in certain instances the relatively low-end Radeon RX 6650 XT is faster than the high-end RTX 4090. This phenomenon was previously explained in a study on Nvidia driver overhead: in brief, Nvidia's architecture creates more driver overhead for the CPU, so frame rates take a greater hit in CPU-limited scenarios with a GeForce GPU than with a Radeon GPU.

In some cases, a lower-end product like the 6650 XT can end up being faster than the RTX 4090 if the CPU is not able to keep up. It's important to note that you wouldn't typically pair a Core i3-13100 with an RTX 4090, but the same effect can be observed with lower-end GeForce GPUs. However, that is not the focus of this analysis.

What we want to examine is the performance of the Ryzen 5 5600 and Core i3-13100 CPUs. Using the 6650 XT, we see that the 5600 is 6% faster in terms of average frame rate, but 17% faster in terms of 1% lows. This is a significant margin considering the GPU being used.

When we upgrade to the 6950 XT, the average frame rate margin increases to 25% in favor of the Ryzen 5600 and 30% for the 1% lows. We see similar margins using the RTX 4090, even though the overall frame rates are lower. In this test, the Ryzen 5600 is 26% faster than the Core i3-13100, or 28% faster when comparing the 1% lows.
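For clarity, the "X% faster" margins quoted throughout this article are standard relative-performance arithmetic. A quick sketch, using made-up frame rates rather than any result from our charts:

```python
def percent_faster(fps_a, fps_b):
    """How much faster fps_a is than fps_b, expressed as a percentage."""
    return (fps_a / fps_b - 1) * 100

# e.g. a CPU averaging 126 fps vs one averaging 100 fps is 26% faster
print(round(percent_faster(126, 100)))  # 26
```

Note the asymmetry: the slower part would be described as roughly 21% slower (100/126), not 26%, so the direction of the comparison matters.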

Additionally, we see that the Core i3-13100 is at most 2% faster than the i3-12100 in this testing, while the Ryzen 5 5500 comfortably beats both Core i3 processors.

GPU scaling in Total War: Warhammer III differs from what we saw in Watch Dogs. With the high quality preset, the Radeon 6650 XT is capped at around 80 fps, and all four processors hit that same limit, demonstrating a strong GPU bottleneck. Upgrading to the Radeon 6950 XT largely removes this limitation: the Ryzen 5600 is 9% faster than the Core i3-13100 for the average frame rate and 20% faster for the 1% lows. The RTX 4090 pushes these margins further still, giving the Ryzen 5 a 23% advantage in average frame rate and 25% faster 1% lows.
Hitman 3 performs similarly across all four CPUs with the 6650 XT, averaging around 110 fps. With the 6950 XT the picture changes drastically: the Ryzen 5500 merely matches the Core i3s, while the Ryzen 5600 is 14% faster than the i3-13100 and a massive 24% faster for the 1% lows. When paired with the RTX 4090 the average margins close up, although the 1% lows take a hit, which may be due to the additional driver overhead or could be an issue with the Nvidia drivers. Here the Ryzen 5600 is still 12% faster than the Core i3-13100 for average frame rates, and 8% faster for the 1% lows.
A Plague Tale: Requiem is demanding on both the GPU and CPU when using ultra quality settings. The Radeon 6650 XT manages just over 60 fps, and the Ryzen 5 5500 struggles due to its smaller L3 cache. Interestingly, the Radeon 6950 XT is faster than the GeForce RTX 4090 at 1080p in this title: the 5600 was 25% faster with the Radeon GPU, which isn't surprising based on previous benchmark results. In terms of CPU performance, the Ryzen 5 5600 and the Core i3-13100 are roughly on par in this game.
When using the Radeon 6650 XT in Call of Duty: Modern Warfare II, performance was very consistent regardless of the processor used, with the Ryzen 5600 just 6% faster than the Core i3-13100. With the Radeon 6950 XT, the Ryzen 5600 gained a 20% advantage in average frame rate and a 22% advantage in the 1% lows over the Core i3-13100. With the RTX 4090 it was 14% faster for the average frame rate and 33% faster for the 1% lows, suggesting the additional overhead of the GeForce driver hurts the Core i3s in this test.
The ray tracing effects used for this testing place a high demand on both the CPU and GPU. Surprisingly, the Core i3 processors outperformed the Ryzen 5 5600 with the Radeon GPUs, the Radeon 6950 XT showing an 11% margin of victory. With the RTX 4090, however, the GeForce GPU had a significant impact on the 1% lows of the Core i3s, leaving the Ryzen 5600 20% ahead.

Shadow of the Tomb Raider, released in 2018, remains one of the most CPU-demanding single-player games available. The built-in benchmark isn't suitable for CPU testing, as it is primarily a GPU benchmark, so testing was conducted in the Village section, which pushes the CPU to its limits. The Radeon 6650 XT achieved 110 fps on the highest quality preset, and all four CPUs were able to meet that target. With the Radeon 6950 XT, the Core i3 processors fell behind: the Ryzen 5600 was 21% faster than the 13100. With the RTX 4090, which intensifies the CPU load further, the Ryzen 5600 was 27% faster than the 13100, and 36% faster for the 1% lows.
The Radeon 6650 XT allowed for a maximum of 120-130 fps in Horizon Zero Dawn on all four CPUs. With the Radeon 6950 XT, performance improved considerably: the 12100, 13100 and 5500 averaged around 160 fps, while the Ryzen 5600 went further, rendering 189 fps. That made the Ryzen 5600 12% faster than the i3-13100, a margin that grew to 22% with the RTX 4090.


Cyberpunk 2077 is heavily GPU-bound, and with the Radeon 6650 XT it was GPU-limited even at medium quality settings. All four CPUs performed better with the Radeon 6950 XT than with the RTX 4090: the Ryzen 5600 was 12% faster using the Radeon GPU, and it was 15% faster than the Core i3-13100. This is another strong showing for the Zen 3 processors.

Assetto Corsa Competizione is a driving simulator that puts a heavy load on the CPU. Our tests using the medium quality preset showed that it is primarily CPU-bound on modern hardware.

Frame rates were almost identical with either the Radeon 6650 XT or the 6950 XT, and even with the faster of the two GPUs the Ryzen 5600 was only 7% faster than the 13100. The Ryzen 5500, on the other hand, did not perform well in this test: L3 cache capacity is crucial in this game, which is why the 5500 struggled while parts like the 5800X3D perform exceptionally well.

The Riftbreaker is a game that leans heavily on the CPU, with an emphasis on single-core performance, as a comparison between the Ryzen 5600 and the Core i3-13100 makes evident. The Ryzen 5500, however, struggles in this game due to its smaller L3 cache, which reduces its effective IPC (instructions per clock).
Counter-Strike is primarily CPU-limited, as its graphics haven't been significantly updated in a long time. Using the Radeon 6650 XT, the Ryzen 5600 was 17% faster than the Core i3-13100 for the average frame rate, and 11% faster for the 1% lows.

The Ryzen 5500, however, was the slowest of the four CPUs tested here, with the Core i3-13100 up to 19% faster. Despite this, overall performance between the Ryzen 5500 and the Core i3s was relatively similar.

Looking at the 12-game average, performance with the Radeon 6650 XT was consistent across all four CPUs, as the GPU was the bottleneck, though there was some variation, particularly for the Ryzen 5500. On average, the Ryzen 5600 was only 3% faster than the Core i3-13100, and we consider any difference within 5% a tie.

The Ryzen 5600 outperforms the Core i3-13100 by 11% when the GPU bottleneck is largely removed with the Radeon 6950 XT. This margin increases to 12% when looking at 1% lows. Similar performance advantages were observed with the GeForce RTX 4090.
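As a footnote on methodology, a multi-game average like this is typically aggregated with a geometric mean, which handles games with very different frame rates more fairly than a plain arithmetic average. That aggregation method is an assumption on our part, and the per-game figures in this sketch are made up:

```python
import math

def geomean(values):
    """Geometric mean: the nth root of the product of n values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

cpu_a = [100, 150, 80]   # hypothetical per-game average fps for one CPU
cpu_b = [110, 160, 95]   # and for another CPU across the same games

ratio = geomean(cpu_b) / geomean(cpu_a)
print(f"{(ratio - 1) * 100:.1f}% faster")  # prints "11.7% faster"
```

With an arithmetic mean, a single very high-fps title (like an esports game) would dominate the result; the geometric mean weights each game's ratio equally.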

Conclusion

The Ryzen 5 5600 is currently the best budget CPU option on the market, having dropped as low as $120 and currently selling for $150. The Core i3-13100 enters the market at the same price point, but it is not a strong enough competitor to the Ryzen 5 5600. Both the AMD AM4 platform and the Core i3s offer similar upgrade paths down the line, and the Ryzen 5 5500 remains hard to beat for those looking to save money. The GPU scaling results didn't show anything out of the ordinary, and the Radeon 7900 XTX wasn't included, as pairing such an expensive GPU with these CPUs is only relevant for scientific testing or extreme future-proofing scenarios.
