Summary

  • Intel recently sold off significantly on news of Nvidia entering the (data center) CPU business.
  • I consider the sell-off a short-term overreaction for multiple reasons.
  • Primarily, the CPU won’t launch for two more years, and Nvidia itself said it is targeted at niche applications.
  • Moreover, Nvidia’s inroads into CPUs should actually signal to investors that Nvidia is playing catch-up to the leader in AI on CPUs, which is Intel.
  • Previous sell-offs have turned out to be overreactions: Apple Silicon, 7nm delay, Microsoft chip rumor, etc.

    Intel: Nvidia CPU Scare Is Smoke And Mirrors

    Investment Thesis

    In my most recent Intel (INTC) coverage, I have been hesitant to recommend the shares given that they are near their all-time high: I argued the market might have gotten a bit ahead of itself in anticipation of Intel’s return to leadership under Pat Gelsinger, which obviously won’t happen overnight (unlike the rally of recent months).

    On that note, there has recently been some other news that may present exactly such a “buy the dip” opportunity. Nvidia (NVDA) made headlines with its announcement that it is entering the data center CPU business.

    However, for several reasons, investors should ignore this news, as it simply won’t meaningfully impact Intel for at least another half-decade. Hence, the sell-off (which happened within minutes of the announcement, indicating that the news wasn’t carefully analyzed) represents a short-term overreaction by the market.

    What happened previously

    I have previously covered the collision course of these companies in quite a bit of depth: Nvidia Vs. Intel: The Semi Battle Of The Decade (NASDAQ:INTC).

    Relevant to that analysis: the present announcement confirms that Nvidia doesn’t need to acquire Arm outright to make its own CPUs.

    Nvidia CPU announcement

    Nvidia announced Grace, its first data center CPU. The headline claim is that it will deliver 10x the performance of today’s fastest CPUs.

    At first sight, this indeed seems quite worrying for the established leader in server CPUs, Intel. It has been known for a while that competition is increasing not only from AMD (AMD), but also from Arm: Amazon (AMZN) AWS Graviton, Apple (AAPL) Silicon, Ampere, etc. Another offering that may leapfrog Intel, especially one with Nvidia’s weight behind it, may further signal the end of Intel’s data center dominance. Indeed, Nvidia’s data center business has already been growing at around triple-digit rates over the last few quarters, boosted by the Mellanox acquisition.

    However, this is where the caveats start, and there are a few. First, the 10x claim is based on AI training performance. The reason is that Grace is not targeted as a general-purpose CPU; instead, it is designed to be paired with Nvidia’s own GPUs for AI workloads. Thanks to this co-design, which leverages NVLink, the overall platform will provide 900 GB/s of bandwidth between the CPU and GPU.

    This explains why Nvidia is venturing into CPUs in the first place: Intel CPUs obviously do not have Nvidia’s NVLink.
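
    To put that 900 GB/s figure in perspective, here is a minimal back-of-the-envelope sketch comparing how long it would take to stream the parameters of a hypothetical trillion-parameter model over a conventional PCIe 4.0 x16 link (roughly 32 GB/s peak) versus a 900 GB/s NVLink-class connection. The model size, parameter format, and the PCIe figure are illustrative assumptions, not Nvidia’s numbers:

        # Back-of-the-envelope: time to move model parameters between CPU and GPU
        # at different link bandwidths. All inputs are illustrative assumptions.
        PARAMS = 1e12              # hypothetical trillion-parameter model
        BYTES_PER_PARAM = 2        # FP16 storage: 2 bytes per parameter
        model_bytes = PARAMS * BYTES_PER_PARAM   # ~2 TB of parameters

        links_gb_per_s = {
            "PCIe 4.0 x16 (assumed ~32 GB/s peak)": 32,
            "Grace CPU-GPU NVLink (claimed)": 900,
        }

        for name, bandwidth in links_gb_per_s.items():
            seconds = model_bytes / (bandwidth * 1e9)
            print(f"{name}: ~{seconds:.0f} s to move {model_bytes / 1e12:.0f} TB")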

    Secondly, this means it is unlikely that the 10x performance Nvidia claims is entirely attributable to the CPU. Instead, it is the overall system performance (the combination of Grace and likely Hopper) that increases, as that is the goal of including these high-speed links between the two chips. Hence, in pure CPU benchmarks, it is unlikely that Grace will have much of an advantage, if any at all. That is simply not what this CPU is aimed at. Nvidia likely won’t even sell it as a standalone CPU.

    Indeed, AnandTech compared Grace against the 2019 AMD Rome CPU:

    But other than that, all the company is saying is that the cores should break 300 points on the SPECrate2017_int_base throughput benchmark, which would be comparable to some of AMD’s second-generation 64 core EPYC CPUs.

    Thirdly, according to Nvidia, it isn’t targeted at just any AI workload. Rather, it is specifically designed for the next generation of the largest AI models, which will have trillions of parameters. These models will be trained on AI supercomputers, and Nvidia has indeed announced such a design win. In other words, with AI being just a subset of the overall data center market, this CPU is aimed at just a subset of even that AI/HPC market.

    Lastly, in Nvidia’s very own words, this is a niche CPU:

    While the vast majority of data centers are expected to be served by existing CPUs, Grace – named for Grace Hopper, the U.S. computer-programming pioneer – will serve a niche segment of computing.

    No increase in competition

    One other significant point to be aware of is that Grace is based on the Arm Neoverse platform.

    Why is this significant? Because while a new vendor will indeed be offering CPUs for the data center, the underlying technology is no different from what all other Arm-based vendors use: AWS Graviton, Ampere, and the rest all build on the same Neoverse technology.

    This actually implies that x86, not Arm, is the healthier platform, with two distinct vendors offering their own technology. In other words, why would the world switch from x86, which is clearly the more competitive platform, to Arm, where there really is only one independent technology stack? For years, people complained about the Intel monopoly, but that monopoly no longer exists. In the Arm world, on the other hand, especially after Qualcomm’s (QCOM) Nuvia acquisition, there is nothing besides Neoverse.

    Intel’s response

    Still, with Nvidia not just entering the CPU business (“NVIDIA is now a three-chip company,” as its CEO put it), but also tightly integrating CPU and GPU, investors may wonder what Intel has in response, given Intel’s own inroads into GPUs.

    The first answer is Ponte Vecchio. While it is a GPU rather than a CPU like Grace, it already contains a lot of technology to facilitate close integration with Intel’s Xeon CPUs. And just like the Grace-Hopper combination, Ponte Vecchio is aimed at HPC and AI.

    In particular, Ponte Vecchio already contains some CPU-derived hardware on the chip itself: SIMD processing units, which are usually found only on CPUs. Secondly, to enable a tight interconnection with the actual Xeons, Ponte Vecchio contains the CXL-based Xe Link.

    Additionally, note that Intel is introducing its own equivalent of Nvidia’s Tensor Cores for AI acceleration with Sapphire Rapids: its Advanced Matrix Extensions (AMX). With the recent launch of Ice Lake-SP, Intel claimed that this CPU already achieves a 1.5x geomean performance advantage over Nvidia’s latest A100 on a set of 20 AI/ML workloads, and Intel has indicated AMX will bring another 4-8x speed-up. Hence, Intel is already the leader in AI on CPUs, can compete favorably against even the fastest GPUs, and will soon further solidify this lead with Sapphire Rapids. By the time Grace launches, Intel will have moved on to its next-gen Granite Rapids.
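
    For readers wondering what “Tensor Cores on a CPU” look like in practice: matrix engines of this kind multiply low-precision tiles (such as INT8 or BF16) while accumulating results in a wider format. The NumPy sketch below is only a conceptual illustration of that operation, not actual AMX code, and the tile sizes are arbitrary assumptions:

        import numpy as np

        # Conceptual sketch of the operation a matrix engine accelerates:
        # multiply low-precision (INT8) tiles while accumulating into wider
        # INT32, which preserves accuracy despite the narrow inputs.
        rng = np.random.default_rng(0)
        activations = rng.integers(-128, 128, size=(16, 64), dtype=np.int8)
        weights = rng.integers(-128, 128, size=(64, 16), dtype=np.int8)

        # INT8 x INT8 -> INT32 accumulation (done per tile in dedicated hardware)
        acc = activations.astype(np.int32) @ weights.astype(np.int32)
        print(acc.shape, acc.dtype)   # (16, 16) int32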

    In other words, as Nvidia’s inroads into CPUs with Grace (and the proposed Arm acquisition) show, Nvidia seems to have realized that CPUs are still quite important in this heterogeneous compute era. From that view, investors should observe that this move is really about Nvidia playing catch-up to Intel, not the other way around.

    After I wrote this article, an interview with Intel CEO Pat Gelsinger appeared in which he expressed exactly the same view:

    We announced our Ice Lake [a new microprocessor for servers] last week with an extraordinarily positive response. And in Ice Lake, we have extraordinary expansions in the A.I. capabilities. [Nvidia is] responding to us. It’s not us responding to them. Clearly this idea of CPUs that are A.I.-enhanced is the domain where Intel is a dramatic leader.

    Lastly, I saw some remarks in the press that the 20-exaflops AI supercomputer that will be built with Grace (and Hopper) is over an order of magnitude faster than the currently announced exascale supercomputers. However, investors should note that such comparisons are misleading. Nvidia’s 20 exaflops are based on AI performance, which is measured at FP16 or even INT8 precision. (The number refers to the number of bits used to represent a value; the smaller it is, the fewer transistors are required to do math with such numbers.)

    By contrast, the exaflops specification of supercomputers like the Intel-powered Aurora is based on FP64 performance. But since Intel’s Ponte Vecchio also supports FP16 and INT8, Aurora’s AI performance should likewise be several exaflops.
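
    As a rough illustration of why these exaflops figures aren’t comparable, the sketch below shows how the same machine can be quoted at very different peak numbers depending on the precision behind the figure. The per-accelerator throughputs and the chip count are made-up assumptions purely to demonstrate the arithmetic, not vendor specifications:

        # Illustrative only: the same machine quoted at very different "exaflops"
        # depending on numeric precision. All numbers below are assumptions.
        fp64_tflops_per_chip = 10       # assumed peak FP64 per accelerator (TFLOPS)
        fp16_tflops_per_chip = 300      # assumed peak FP16 tensor math per accelerator
        num_chips = 50_000              # assumed accelerator count

        fp64_exaflops = fp64_tflops_per_chip * num_chips / 1e6   # TFLOPS -> EFLOPS
        fp16_exaflops = fp16_tflops_per_chip * num_chips / 1e6

        print(f"FP64 peak: {fp64_exaflops:.1f} exaflops")   # 0.5 exaflops
        print(f"FP16 peak: {fp16_exaflops:.1f} exaflops")   # 15.0 exaflops, same machine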

    NVLink

    While this is certainly a valid way to improve system performance, note that Nvidia’s strategy is built in large part around its proprietary NVLink interconnect, as Grace shows. Other vendors such as AMD have their own equivalents, such as Infinity Fabric.

    Intel may be one of the few notable exceptions, as it is moving squarely forward with the open interconnect standard CXL, which is based on PCIe 5.0. Intel has also already open-sourced the AIB protocol used with its EMIB 2.5D interconnect for chiplets.

    This more open interconnect strategy is also what has enabled Intel’s Habana to leapfrog Nvidia in performance per dollar, because Habana integrates the common Ethernet interconnect on-chip; there is no need for proprietary, vendor-specific NVSwitches. As Habana’s CEO has said, its 16nm Gaudi achieves 70% of the performance of Nvidia’s A100, but at a significantly lower cost.
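
    To make the performance-per-dollar arithmetic concrete, here is a minimal sketch. Only the 70%-of-A100 figure comes from the claim above; the prices are purely hypothetical placeholders, not published list prices:

        # Hypothetical performance-per-dollar comparison. The 0.7x relative-
        # performance figure is the cited Habana claim; the prices are made-up
        # placeholders purely to show the arithmetic, not actual list prices.
        a100_perf, a100_price = 1.00, 10_000     # A100 normalized to 1.0, assumed USD
        gaudi_perf, gaudi_price = 0.70, 5_000    # ~70% of A100, assumed lower price

        ratio = (gaudi_perf / gaudi_price) / (a100_perf / a100_price)
        print(f"Gaudi vs. A100 performance per dollar: {ratio:.2f}x")   # 1.40x here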

    Let the weak hands sell their Intel shares

    It is becoming quite a regular pattern: at the slightest bit of bearish news, the stock sheds billions in market cap on a whim. The 7nm delay, low guidance, rumors of Microsoft (MSFT) working on Arm chips, the Nvidia CPU news, etc. Note that such events have often presented valid buy the dip opportunities.

    From that view, it will be interesting to see whether Nvidia’s stock behaves similarly when Intel soon launches its Xe HP and Xe HPG chips to make inroads into Nvidia’s markets. If it doesn’t, that would imply that a lot of weak hands own Intel shares.

    (That would present a more general lesson in investing that has little to do with Intel: great investments often come from conviction in the long-term success of a company.)

    Investor Takeaway

    Intel sold off by around 5% of its market cap the moment Nvidia announced its Grace CPU. However, upon review, it should be clear that this CPU does not pose a real threat to Intel in any way that warrants a sudden sell-off of this magnitude, if only because it won’t even launch for another two years. Even Nvidia said it is a niche CPU. Don’t get scared by the FUD.

    As such, I expect this news to quickly fade into the background relative to the things that may tangibly affect Intel’s stock performance going forward, such as the explosive PC growth in Q1.

    Notably, Intel has seen quite a strong rally over the last few months. It is of course hard to predict if and how much further the rally may go, but for those who believe it will, the sell-off presents an obvious buy the dip opportunity, just like some of the previous sell-offs over the last few quarters have turned out to be.

By FYIPC
