Is Nvidia Overpriced? | 3 Shocking Truths Revealed!

The current stock price of Nvidia is around $339, giving it a market capitalization of roughly $209 billion. Intel, by comparison, is valued at $245 billion with a stock price of $58, despite having roughly 7x the revenue of Nvidia. This leads analysts to ask a basic question: is Nvidia overpriced? In this article, we compare our research on Nvidia to a post by Reddit user u/HyperInflation2020 first posted here, and we come to a resounding conclusion. This article is part of our Market Research Series provided to our investors across the globe.


How many people really understand what they’re buying, especially when it comes to highly specialized hardware companies like NVidia? Most NVidia investors seem to be relying on a vague idea of how the company should thrive “in the future”, as their GPUs are ostensibly used for Artificial Intelligence, Cloud, holograms, etc. Having been shocked by how this company is represented in the media, I decided to lay out how this business works, doing my part to fight for reality. With what’s been going on in markets, I don’t like my chances but here goes:

Let’s start with…

How does NVidia (NVDA) make money? | Is Nvidia Overpriced?

NVDA is in the business of semiconductor design. As a simplified image in your head, you can imagine this as designing very detailed and elaborate posters. Their engineers create circuit patterns for printing onto semiconductor wafers. NVDA then pays a semiconductor foundry (the printer – generally TSMC) to create chips with those patterns on them.

Simply put, NVDA’s profit is the price at which they can sell those chips, less the cost of printing them and the cost of paying their engineers to design them.

Notably, after the foundry prints the chips, NVDA also has to pay (I say pay, but really it is more like “sell at a discount to”) their “add-in board” (AIB) partners to stick the chips onto printed circuit boards (what you might imagine as green things with a bunch of capacitors on them). That leads to the final form in which buyers experience the GPU.
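
To make the economics concrete, here is a minimal sketch of that fabless profit model in Python. Every figure in it is an invented placeholder for illustration, not NVDA’s actual prices, costs, or volumes.

```python
# A stylized fabless profit model. All figures are invented placeholders,
# not NVDA's actual unit economics.

def fabless_operating_profit(units_sold, avg_selling_price,
                             foundry_cost_per_chip, aib_discount_per_chip,
                             design_and_opex):
    """Profit = chip revenue - foundry (TSMC) cost - AIB discount - design/overhead."""
    revenue = units_sold * avg_selling_price
    per_chip_costs = units_sold * (foundry_cost_per_chip + aib_discount_per_chip)
    return revenue - per_chip_costs - design_and_opex

profit = fabless_operating_profit(
    units_sold=10_000_000,
    avg_selling_price=300.0,       # hypothetical $ per GPU
    foundry_cost_per_chip=120.0,   # hypothetical amount paid to the foundry
    aib_discount_per_chip=40.0,    # hypothetical discount given up to board partners
    design_and_opex=1.2e9,         # hypothetical engineering and overhead spend
)
print(f"Hypothetical operating profit: ${profit / 1e9:.2f}Bn")   # -> $0.20Bn
```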

What is a GPU?

NVDA designs chips called GPUs (Graphical Processing Units). Initially, GPUs were used for the rapid processing and creation of images, but their use cases have expanded over time. You may be familiar with the CPU (Central Processing Unit).

CPUs sit at the core of a computer system, doing most of the calculation and taking orders from the operating system (e.g. Windows, Linux). AMD and Intel make CPUs. GPUs assist the CPU with certain tasks. You can think of the CPU as having a few giant, very powerful engines, while the GPU has a lot of small, much less powerful engines.

Sometimes you have to do a lot of really simple tasks that don’t require powerful engines to complete. Here, the act of engaging the powerful engines is a waste of time, as you end up spending most of your time revving them up and revving them down. In that scenario, it helps the CPU to hand that task over to the GPU in order to “accelerate” the completion of the task.

The GPU only revs up a small engine for each task and is able to rev up all the small engines simultaneously to knock out a large number of these simple tasks at the same time. Remember, the GPU has lots of engines. The GPU also has an edge in how it interfaces with memory, but let’s not get too technical.
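
A rough way to feel this difference is to compare dispatching tiny tasks one at a time against processing them in bulk. The sketch below uses NumPy vectorization on a CPU as a stand-in for the GPU’s “many small engines”; on real hardware you would offload the array to the GPU with something like CuPy or PyTorch, but the shape of the comparison is the same.

```python
import time
import numpy as np

# Ten million tiny, identical tasks: add 1 to each number.
data = np.random.rand(10_000_000)

# "Rev up one big engine per task": a Python loop dispatches each tiny task
# individually, and most of the time goes to dispatch overhead rather than work.
start = time.perf_counter()
slow = [value + 1.0 for value in data]
loop_seconds = time.perf_counter() - start

# "Rev up many small engines at once": one bulk, vectorized operation.
# On an actual GPU the same idea applies via e.g. CuPy (cp.asarray(data) + 1.0),
# with thousands of small cores running simultaneously.
start = time.perf_counter()
fast = data + 1.0
vector_seconds = time.perf_counter() - start

print(f"one-at-a-time: {loop_seconds:.3f}s, bulk: {vector_seconds:.3f}s")
```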

Who uses NVidia (NVDA’s) GPUs?

There are two main broad end markets for NVDA’s GPUs – Gaming and Professional. Let’s dig into each one:

The Gaming Market:

A Bit of Ancient History (Skip if found unnecessary)

GPUs were first heavily used for gaming in arcades. They then made their way to consoles and finally PCs. NVDA started out in the PC phase of GPU gaming usage. They weren’t the first company in the space, but they made several good moves that ultimately led to a very strong market position. Firstly, they focused on selling into OEMs – the equivalent of today’s Dell/HP/Lenovo – which allowed a small company to reach a big market without having to build a lot of relationships.

Secondly, they focused on the design aspect of the GPU, and relied on their Asian supply chain to print and package the chip as well as to install it on a printed circuit board – the Asian supply chain ended up being the best in semis.

But the insight that really let NVDA dominate was noticing that some GPU manufacturers were focusing on keeping hardware-accelerated Transform and Lighting as a Professional GPU feature. As a start-up, with no professional GPU business to disrupt, NVidia decided their best ticket into the big leagues was blowing up the market by including this professional-grade feature into their gaming product.

It worked – and this was a real masterstroke – the visual and performance improvements were extraordinary. 3DFX, the initial leader in PC gaming GPUs, was vanquished, and importantly it happened when funding markets shut down with the tech bubble bursting and after 3DFX made some large ill-advised acquisitions. Consequently, 3DFX went from hero to zero, and NVDA bought them for a pittance out of bankruptcy, acquiring the best IP portfolio in the industry.

Some more Modern History

This is what NVDA’s pure gaming card revenue looks like over time – NVDA only really broke these out in 2005 (note: by “pure”, this means ex-Tegra revenues).

So what is the history here? Well, back in the late 90s when GPUs were first invented, they were required to play any 3D game.

As discussed in the early history above, NVDA landed a hit product early on and got a strong burst of growth: revenues went from $160M in 1998 to $1.9Bn in 2002. But then NVDA ran into strong competition from ATI (later purchased by, and currently owned by, AMD).

While NVDA’s sales struggled to stay flat from 2002 to 2004, ATI’s doubled from $1Bn to $2Bn. NVDA’s next major win came in 2006, with the 8000 series. ATI was late with a competing product, and NVDA’s sales skyrocketed – as can be seen in the graph above.

After ATI was acquired by AMD, the combined company was unfocused for some time, and NVDA was able to keep its lead for an extended period. Sales slowed in 2008/2009, but that was due to the GFC – people don’t buy expensive GPU hardware in recessions.

And then we got to 2010 and the tide changed. Growth in desktop PCs ended. Here is a chart from Statista.

This resulted in two negative secular trends for Nvidia. Firstly, with the decline in popularity of desktop PCs, growth in gaming GPUs faded as well (Here is a chart from Jon Peddie). Note that NVDA sells discrete GPUs, aka DT (Desktop) Discrete. Integrated GPUs are mainly made by Intel (these sit on the motherboard or with the CPU).

NVidia Desktop GPU Sales Comparison

You can see from the chart above that discrete desktop GPU sales are fading faster than integrated GPU sales. This is the other secular trend hurting NVDA’s gaming business.

Integrated GPUs are getting better and better, taking over a wider range of tasks that were previously the domain of the discrete GPU. Surprisingly, the most popular eSports game of recent times – Fortnite – only requires Intel HD 4000 graphics – an Integrated GPU from 2012!

So at this point you might go back to NVDA’s gaming sales, and ask the question: What happened in 2015? How is NVDA overcoming these secular trends?

The answer consists of a few parts. First, AMD dropped the ball in 2015. As you can see in this chart, sourced from 3DCenter, AMD market share was halved in 2015, due to a particularly poor product line-up.

AMD Market Share Chart

Following this, NVDA came out with Pascal in 2016 – a very powerful offering in the mid to high-end part of the GPU market. At the same time, AMD was focusing on rebuilding and had no compelling mid or high-end offerings.

AMD mainly focused on maintaining scale in the very low end. Following that came 2017 and 2018: AMD’s offering was still very poor at the time, but crypto mining drove demand for GPUs to new levels, and AMD’s GPUs were more compelling from a price-performance standpoint for crypto mining initially, perversely leading to AMD gaining share.

NVDA quickly remedied that by improving their drivers to better mine crypto, regaining their relative positioning, and profiting in a big way from the crypto boom. Supply that was calibrated to meet gaming demand collided with crypto mining demand and Average Selling Prices of GPUs shot through the roof. Cryptominers bought the top of the line GPUs aggressively.

A good way to see changes in crypto demand for GPUs is the mining profitability of Ethereum:

Profitability of Mining Ethereum
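
For reference, mining profitability per GPU is roughly expected block rewards minus electricity cost. Here is a hedged sketch of that calculation; every input is an illustrative placeholder rather than live network data, and pool fees are ignored.

```python
# Expected daily profit for one GPU mining Ethereum (proof-of-work era).
# All inputs below are illustrative placeholders, not live network data.

def daily_mining_profit_usd(gpu_hashrate_mhs, network_hashrate_ths, eth_price_usd,
                            block_reward_eth=2.0, block_time_s=13.0,
                            gpu_power_kw=0.18, electricity_usd_per_kwh=0.10):
    share_of_network = (gpu_hashrate_mhs * 1e6) / (network_hashrate_ths * 1e12)
    blocks_per_day = 86_400 / block_time_s
    revenue = share_of_network * blocks_per_day * block_reward_eth * eth_price_usd
    power_cost = gpu_power_kw * 24 * electricity_usd_per_kwh
    return revenue - power_cost

# When the ETH price (or the block reward) falls while network hashrate stays
# high, this number goes negative and GPU demand from miners evaporates.
print(daily_mining_profit_usd(gpu_hashrate_mhs=45, network_hashrate_ths=250,
                              eth_price_usd=500))
```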

This leads us to where we are today. 2019 saw NVDA’s gaming revenues drop, along with a drop in cryptocurrency mining. Where are they likely to head?

The secular trends of falling desktop sales along with falling discrete GPU sales have reasserted themselves, as per the Jon Peddie research above. Cryptomining profitability has collapsed.

AMD has come out with a new architecture, Navi, and its first iteration, the 5700 XT, competes effectively with NVDA in the mid-to-high-end space on a price/performance basis. This is the first real competition from AMD since 2014.

NVDA can see all these trends, and they tried to respond. Firstly, with volumes clearly declining, and likely with a glut of second-hand GPUs that can make their way to gamers over time from the crypto space, NVDA decided to pursue a price over volume strategy. They released their most expensive set of GPUs by far in the latest Turing series.

They added a new feature, Ray Tracing, by leveraging the Tensor Cores they had created for Professional uses, hoping to use that as justification for higher prices (more on this in the section on Professional GPUs). Unfortunately for NVDA, gamers have responded quite poorly to Ray Tracing – it caused performance issues, had poor support, poor adoption, and the visual improvements in most cases are not particularly noticeable or relevant.

The last recession led to gaming revenues falling 30%, despite NVDA being in a very strong position at the time vis-à-vis AMD – this time around their position is quickly slipping and it appears that the recession is going to be bigger. Additionally, the shift away from discrete GPUs in gaming continues.

To make matters worse for NVDA, AMD won the slots in both the New Xbox and the New PlayStation, coming out later this year. The performance of just the AMD GPU in those consoles looks to be competitive with NVidia products that currently retail for more than the entire console is likely to cost. Consider that usually, you have to pair that NVidia GPU with a bunch of other expensive hardware. The pricing and margin impact of this console cycle on NVDA is likely to be very substantially negative.

It would be prudent to assume a greater than 30% fall in gaming revenues from the very elevated 2019 levels, with likely secular decline to follow.

The Professional Market:

A Bit of Ancient History (again, skip if found unnecessary)

As it turns out, graphical accelerators were first used in the Professional market, long before they were employed for Gaming purposes. The big leader in the space was a company called Silicon Graphics, who sold workstations with custom silicon optimized for graphical processing.

Their sales were only $25Mn in 1985, but by 1997 they were doing $3.6Bn in revenue – truly exponential growth. Unfortunately for them, from that point on, discrete GPUs took over, and their highly engineered, customized workstations looked exorbitantly expensive in comparison. Sales sank to $500Mn by 2006 and, with no profits in sight, they ended up filing for bankruptcy in 2009. Competition is harsh in the semiconductor industry.

Initially, the Professional market centered on visualization and design but it has changed over time. There were a lot of players and a lot of nuances but I am going to focus on more recent times, as they are more relevant to NVidia.

Some More Modern History

NVDA’s Professional business started after its gaming business, but we don’t have revenue disclosures that show exactly when it became relevant. This is what we do have – going back to 2005:

NVidia Revenue

In the beginning, Professional revenues were focused on the 3D visualization end of the spectrum, with initial sales going into workstations that were edging out the customized builds made by Silicon Graphics. Fairly quickly, however, GPUs added more and more functionality and started to turn into general parallel data processors rather than being solely optimized towards graphical processing.

As this change took place, people in scientific computing noticed and started using GPUs to accelerate scientific workloads that involve very parallel computation, such as matrix manipulation. This started at the workstation level, but by 2007 NVDA decided to make a new line-up of Tesla series cards specifically suited to scientific computing. The professional segment now has several points of focus:

  1. GPUs used in workstations for things such as CAD graphical processing (Quadro Line)
  2. GPUs used in workstations for computational workloads such as running engineering simulations (Quadro Line)
  3. GPUs used in workstations for machine learning applications (Quadro Line, though gaming cards can be used for this as well)
  4. GPUs used by enterprise customers for high-performance computing (such as modeling oil wells) (Tesla Line)
  5. GPUs used by enterprise customers for machine learning projects (Tesla Line)
  6. GPUs used by hyperscalers (mostly for machine learning projects) (Tesla Line)

In more recent times, given the expansion of the Tesla line, NVDA has broken up reporting into Professional Visualisation (Quadro Line) and Datacenter (Tesla Line). Here are the revenue splits since that reporting started:

NVidia Quadro
NVidia Tesla

It is worth stopping here and thinking about the huge increase in sales delivered by the Tesla line. The reason for this huge boom is the sudden increase in interest in numerical techniques for machine learning.

Let’s go on a brief detour here to understand what machine learning is because a lot of people want to hype it but not many want to tell you what it actually is. I have the misfortune of being very familiar with the industry, which prevented me from buying into the hype. Oops – sometimes it really sucks being educated.

What is Machine Learning?

At a very high level, machine learning is all about trying to get some sort of insight out of data. Most of the core techniques used in machine learning were developed a long time ago, in the 1950s and 1960s. The most common machine learning technique, which most people have heard of and may be vaguely familiar with, is called regression analysis.

Regression analysis involves fitting a line through a bunch of data points. The most common type of regression analysis is called “Ordinary Least Squares” (OLS) regression, and that type of regression has a “closed form” solution, which means that there is a very simple calculation you can do to fit an OLS regression line to data.

As it happens, fitting a line through points is not only easy to do, it also tends to be the main machine learning technique that people want to use because it is very intuitive. You can make good sense of what the data is telling you and can understand the machine learning model you are using. Obviously, regression analysis doesn’t require a GPU!
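
For the curious, that “very simple calculation” is the normal equations. A minimal NumPy sketch of closed-form OLS on made-up data:

```python
import numpy as np

# Closed-form OLS via the normal equations: beta = (X'X)^(-1) X'y.
# No GPU, and not even an iterative solver, is needed.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 5.0 + rng.normal(0, 1.0, size=200)   # true line: slope 3, intercept 5

X = np.column_stack([np.ones_like(x), x])          # design matrix with an intercept column
beta = np.linalg.solve(X.T @ X, X.T @ y)           # the closed-form solution
print(f"intercept ~ {beta[0]:.2f}, slope ~ {beta[1]:.2f}")
```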

However, there is another consideration in machine learning: if you want to use a regression model, you still need a human to select the data that you want to fit the line through. Also, sometimes the relationship doesn’t look like a line, but rather it might look like a curve. In this case, you need a human to “transform” the data before you fit a line through it in order to make the relationship linear.

So people had another idea here: what if instead of getting a person to select the right data to analyze and the right model to apply, you could just get a computer to do that? Of course, the problem with that is that computers are really stupid.

They have no preconceived notion of what data to use or what relationship would make sense, so what they do is TRY EVERYTHING! And everything involves trying a hell of a lot of stuff. And trying a hell of a lot of stuff, most of which is useless garbage, involves a huge amount of computation. People tried this for a while through to the 1980s, decided it was useless, and dropped it… until recently.

What changed? Well we have more data now, and we have a lot more computing power, so we figured let’s have another go at it. As it happens, the premier technique for trying a hell of a lot of stuff (99.999% of which is garbage you throw away) is called “Deep Learning”. Deep learning is SUPER computationally intensive, and that computation happens to involve a lot of matrix multiplication. And guess what just happens to have been doing a lot of matrix multiplication? GPUs!
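
To see why GPUs fit so neatly, here is a toy forward pass of a two-layer network: under the hood it is almost entirely matrix multiplication. The sizes here are deliberately tiny; real models simply make these matrices enormous and repeat the operation billions of times.

```python
import numpy as np

# A toy two-layer neural network forward pass -- essentially nothing but
# matrix multiplies plus a simple non-linearity.
rng = np.random.default_rng(0)

batch = rng.normal(size=(64, 512))    # 64 examples, 512 input features each
W1 = rng.normal(size=(512, 1024))     # first layer weights
W2 = rng.normal(size=(1024, 10))      # second layer weights

hidden = np.maximum(batch @ W1, 0.0)  # matrix multiply + ReLU non-linearity
logits = hidden @ W2                  # another matrix multiply
print(logits.shape)                   # (64, 10)

# Training repeats this (plus the mirror-image multiplies of backpropagation)
# over and over, which is where the GPU demand comes from.
```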

Here is a chart that, for obvious reasons, lines up extremely well with the boom in Tesla GPU sales:

Deep Learning

Now we need to realize a few things here. Deep Learning is not a magic silver bullet. There are specific applications where it has proven very useful – primarily areas that have a very large number of very weak relationships between bits of data that sum up into strong relationships.

An example of one of those is Google Translate. On the other hand, in most analytical tasks, it is most useful to have an intuitive understanding of the data and to fit a simple and sensible model to it that is explainable.

Deep learning models are not explainable in an intuitive manner. This is not only because they are complicated, but also because their scattershot technique of trying everything leaves a huge amount of garbage inside the model that cancels itself out when calculating the answer, but it is hard to see how it cancels itself out when stepping through it.

Given the quantum of hype on Deep learning and the space in general, many companies are using “Deep Learning”, “Machine Learning” and “AI” as marketing. Not many companies are actually generating significant amounts of tangible value from Deep Learning.

Back to the Competitive Picture

For the Tesla Segment

So NVDA happened to be in the right place at the right time to benefit from the Deep Learning hype. They happened to have a product ready to go and were able to charge a pretty penny for their product. But what happens as we proceed from here?

Firstly, it looks like the hype from Deep Learning has crested, which is not great from a future demand perspective. Not only that, but we really went from people having no GPUs, to people having GPUs. The next phase is people upgrading their old GPUs. It is much harder to sell an upgrade than to make the first sale.

Not only that, but GPUs are not the ideal manifestation of silicon for Deep Learning. NVDA themselves effectively admitted that with their latest iteration in the Datacentre, called Ampere. High-Performance Computing, which was the initial use case for Tesla GPUs, was historically all about double-precision floating point calculations (FP64). High precision calculations are required for simulations in aerospace/oil & gas/automotive.

NVDA basically sacrificed High-Performance Computing (HPC) and shifted further towards Deep Learning with Ampere. The FP64 performance of the A100 (the latest Ampere chip) increased a fairly pedestrian 24% over the V100, from 7.8 to 9.7 TF. It is not a surprise that NVDA lost El Capitan to AMD, given this shift away from a focus on HPC. Instead, NVDA jacked up their Tensor Cores (i.e. not the GPU cores) and focused very heavily on FP16 computation (a lot less precise than FP64).

As it turns out, FP16 is precise enough for Deep Learning, and NVDA recognizes that. The future industry standard is likely to be BFloat16 – the format pioneered by Google, which leads in Deep Learning. Ampere now does 312 TF of BF16, which compares to the 420 TF of Google’s TPU v3 – Google’s machine-learning-specific processor. Not quite up to the 2018 board from Google, but getting better – if they cut out all of the CUDA cores and GPU functionality, maybe they could get up to Google’s spec.
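
To make the FP64/FP16 trade-off tangible, the snippet below shows how much precision FP16 gives up relative to FP64. NumPy has no native bfloat16, so float16 stands in for the low-precision case here; bfloat16 gives up even more mantissa precision in exchange for float32’s exponent range.

```python
import numpy as np

# FP64 keeps ~15-16 significant decimal digits; FP16 keeps only ~3-4.
pi = 3.14159265358979

print(np.float64(pi))        # 3.14159265358979
print(np.float16(pi))        # ~3.14 -- rounded away after a few digits

# Deep Learning tolerates this, since it averages over huge numbers of noisy
# updates. An HPC simulation generally does not, because small errors compound:
a = np.float16(10000.0)
b = np.float16(0.1)
print(a + b)                 # 10000.0 -- the 0.1 vanishes entirely at FP16
```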

And indeed this is the problem for NVDA: when you make a GPU it has a large number of different use cases, and you provide a single product that meets all of these different use cases. That is a very hard thing to do and explains why it has been difficult for competitors to muscle into the GPU space.

On the other hand, when you are making a device that does one thing, such as deep learning, it is a much simpler thing to do. Google managed to do it with no GPU experience and is still ahead of NVDA. It is likely that Intel will be able to enter this space successfully, as they have widely signaled with the Xe.

Then there is the other large negative driver for Deep Learning: the recession we are now in. Demand for GPU instances on Amazon has collapsed across the board, as evidenced by the fall in pricing. The below graph shows one example: this data is for renting out a single Tesla V100 GPU on AWS, which is the typical thing to do in an early exploratory phase for a Deep Learning model:

Tesla V100 GPU rental pricing on AWS

With Deep Learning not delivering near-term tangible results, it is the first thing being cut. On their most recent conference call, IBM noted weakness in their cognitive division (AI) and weaker sales of their Power servers, which is the line that houses Enterprise GPU servers at IBM. Facebook canceled their AI residencies for this year, and Google pushed theirs out. Even if NVDA can put in a good quarter due to its new product rollout (Ampere), the future is rapidly becoming a very stormy place.

For the Quadro Segment

The Quadro segment has been a cash cow for a long time, generating dependable sales and solid margins. AMD just decided to rock the boat a bit. Sensing NVDA’s focus on Deep Learning, AMD seems to be focusing on HPC – the Radeon Pro VII, announced recently at a price point of $1,899, takes aim at NVDA’s most expensive Quadro, the GV100, priced at $8,999. It does 6.5 TFLOPS of double-precision FP64, whereas the GV100 does 7.4 – talk about shaking up a quiet segment.
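
A quick price-per-TFLOP comparison, using only the list prices and FP64 figures quoted above (street prices and real-world performance will of course vary), shows how aggressive that move is:

```python
# Price per FP64 TFLOP, using the list prices and throughput numbers quoted above.
cards = {
    "AMD Radeon Pro VII":  {"price_usd": 1899, "fp64_tflops": 6.5},
    "NVIDIA Quadro GV100": {"price_usd": 8999, "fp64_tflops": 7.4},
}
for name, card in cards.items():
    dollars_per_tflop = card["price_usd"] / card["fp64_tflops"]
    print(f"{name}: ~${dollars_per_tflop:.0f} per FP64 TFLOP")

# Roughly $292 vs $1,216 per TFLOP: similar double-precision throughput
# at a list price cut of roughly 75-80%.
```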

Pulling things together

Let’s go back to what NVidia fundamentally does – paying their engineers to design chips, getting TSMC to print those chips, and getting board partners in Taiwan to turn them into the final product.

We have seen how a confluence of several pieces of extremely good fortune lined up to increase NVidia’s sales and profits tremendously: on the Gaming side, weak competition from AMD from 2015 onwards, coupled with a great product in the form of Pascal in 2016, followed by a huge crypto-driven boom in 2017 and 2018; and on the Professional side, a sudden and unexpected increase in interest in Deep Learning driving Tesla demand sky-high from 2017 to 2019.

It is worth noting what these transient factors have done to margins. When unexpected good things happen to a chip company, sales go up a lot, but few additional costs come with those sales. Strong demand means that you can sell each chip for a higher price, but no additional design work is required, and you still pay the printer, TSMC, the same amount of money. Consequently, NVDA’s margins have gone up substantially: well above their 11.9% long-term average to hit a peak of 33.2%, and more recently 26.5%:

NVidia Operating Margins

The question is, what would be a sensible margin going forward? Obviously 33% operating margin would attract a wall of competition and get competed away, which is why they can only be temporary.

However, NVidia has shifted toward a greater proportion of non-OEM sales and a greater proportion of professional rather than gaming sales. As such, maybe one can be generous and say NVDA can earn an 18% average operating margin over the next cycle.

We can sense check these margins, using Intel. Intel has a long term average EBIT margin of about 25%. Intel happens to actually print the chips as well, so they collect a bigger fraction of the final product that they sell. NVDA, since it only does the design aspect, can’t earn a higher EBIT margin than Intel on average over the long term.

Tesla sales have likely gone too far and will moderate from here – perhaps down to a still more than respectable $2bn per year. Gaming resumes the long-term slide in discrete GPUs, which will likely be replaced by integrated GPUs to a greater and greater extent over time.

But let’s be generous and say gaming maintains $3.5Bn per year for the add-in board business, and let’s assume we keep getting $750Mn or so of Nintendo Switch revenues (despite that product being past the peak of its cycle, with Nintendo themselves forecasting a sales decline).

Let’s assume AMD struggles to make progress in Quadro, despite undercutting NVDA on price by 75%, with continued revenues of $1.2Bn. Add on the other $1.2Bn of Automotive, OEM, and IP (I am not even counting the fact that car sales have collapsed and Automotive is likely to be down big), and we end up with revenues of $8.65Bn. At an average operating margin of 20% through the cycle, that would be $1.75Bn of operating earnings power. If I say that the recent Mellanox acquisition manages to earn enough to pay for all the interest on NVDA’s debt, and I assume a tax rate of 15%, we would have around $1.5Bn in net income.
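
For transparency, here is that back-of-the-envelope buildup laid out as a calculation. All segment figures are simply the assumptions stated above, in billions of dollars.

```python
# Back-of-the-envelope earnings power, using the assumptions above (all in $Bn).
segment_revenue = {
    "Datacenter (Tesla)":          2.00,
    "Gaming (add-in board)":       3.50,
    "Nintendo Switch (Tegra)":     0.75,
    "Professional Vis. (Quadro)":  1.20,
    "Automotive / OEM / IP":       1.20,
}
revenue = sum(segment_revenue.values())          # 8.65
operating_margin = 0.20                          # assumed average margin through the cycle
operating_income = revenue * operating_margin    # ~1.73, rounded to ~1.75 above
tax_rate = 0.15                                  # interest assumed covered by Mellanox earnings
net_income = operating_income * (1 - tax_rate)   # ~1.47, call it ~$1.5Bn
print(f"Revenue ${revenue:.2f}Bn -> EBIT ${operating_income:.2f}Bn -> Net ${net_income:.2f}Bn")
```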

This company currently has a market capitalization of $209 Billion. It blows my mind that it trades on 139x what I consider to be fairly generous earnings – earnings that NVidia never even got close to seeing before the confluence of good luck hit them. But what really stuns me is the fact that investors are actually willing to extrapolate this chain of unlikely and positive events into the future.

Shockingly, Intel has a market cap of $245Bn, only about $36Bn more than NVDA, yet Intel’s sales and profits are 7x higher. And while Intel is facing competition from AMD, it is much more likely to hold onto those sales and profits than NVDA is. These are absolutely stunning valuation disparities.

If I didn’t see NVDA’s price and started from first principles to calculate a prudent price for the company, I would have estimated $1.5Bn of normalized profit on perhaps a 20x multiple – giving them the benefit of the doubt despite heading into a huge recession, and considering that there is not much debt and the company is very well run. That would give a market cap of $30Bn and a share price of $49. And it is currently $339. Wow. Obviously, Nvidia is overpriced, and a short position might be in order.
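
The final valuation step, laid out explicitly. Note that the ~612M share count below is an assumption chosen to reconcile the $30Bn market cap with the $49 per-share figure above; check the actual diluted share count before relying on it.

```python
# Valuation sketch: normalized profit times a generous multiple, divided by shares.
normalized_net_income_bn = 1.5
pe_multiple = 20
fair_market_cap_bn = normalized_net_income_bn * pe_multiple         # $30Bn
shares_outstanding_bn = 0.612                                       # ~612M shares (assumption)
fair_value_per_share = fair_market_cap_bn / shares_outstanding_bn   # ~$49
market_price = 339.0
print(f"Estimated fair value ~${fair_value_per_share:.0f} vs market ${market_price:.0f} "
      f"(~{market_price / fair_value_per_share:.0f}x the estimate)")
```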

On the face of the analysis presented, do you think Nvidia is overpriced? Does Intel or AMD have better opportunities to survive the recession than NVidia? Let us know in the comments.


Mathew Wright, Lead Forex Market Researcher
