Nvidia Stock History: From Gaming Chip to AI Trillion (NVDA)

HALL OF FAME · NVDA

How a graphics-card maker became the most valuable stock on earth.

IPO date: 1999-01-22
All-time high: $140+ (2025, split-adjusted)
CAGR since IPO: 36.0%
$1,000 at IPO worth today: $3.5M

Key milestones

1993
Jensen Huang, Chris Malachowsky, Curtis Priem found Nvidia in a Denny's restaurant.
1999-01-22
IPO at $12 per share; later that year, the GeForce 256 launches, marketed as the world's first GPU.
2006
CUDA toolkit released — turning GPUs into general-purpose parallel processors.
2012
AlexNet wins ImageNet on Nvidia GPUs — start of the deep-learning era.
2016
P100 datacenter GPU launches; Tesla, OpenAI, Google start buying.
2019
Mellanox acquisition announced for $6.9B (closed 2020) — networking added to the AI stack.
2022
H100 (Hopper) launches just in time for the ChatGPT boom.
2023
AI capex tsunami: data-center revenue grows from $15B to $47B in one fiscal year.
2024-06
Briefly the most valuable listed company in the world ($3.3T+).
2025
Blackwell architecture (B100/B200) ships; data-center revenue >$100B annualized.

The Story

Nvidia was founded in 1993 in a Denny's restaurant in San Jose. Three engineers, Jensen Huang, Chris Malachowsky, and Curtis Priem, wanted to bring 3D graphics to the PC. For its first decade Nvidia was a typical hardware vendor in the brutal graphics-chip market. Competitors such as 3dfx, S3, and ATI either died or were acquired; Nvidia survived by committing to a six-month product cycle, a pace that eventually ground its rivals down.

The January 1999 IPO at $12 was unspectacular, and through 2006 Nvidia remained a pure gaming-GPU vendor. Then came the decisive strategic move: Huang bet on the CUDA toolkit, a programming layer that exposed GPUs as general-purpose parallel processors. CUDA was a money-loser for years. Scientists and a few quant hedge funds used it; the broad market did not. But Huang kept investing heavily, because his thesis was simple: if algorithms ever went massively data-parallel, CUDA would become the software standard.

The thesis was confirmed in 2012. AlexNet, built in Geoffrey Hinton's lab, won the ImageNet image-recognition contest by a wide margin, trained on two Nvidia GPUs. That was the starting gun of the deep-learning era. The P100 in 2016 became the first real datacenter GPU, and in 2016-2017 Google, Tesla, and OpenAI placed their first big orders. The H100 (Hopper architecture) launched in 2022, and then ChatGPT dropped on the world in November 2022. Within 12 months Nvidia's datacenter revenue tripled; within about 18 months Nvidia briefly became the most valuable listed company in the world.

What got it into the Hall of Fame

Nvidia's moat is not primarily the GPU silicon; competitors like AMD and Intel can build physically similar chips. The moat is CUDA: nearly two decades of continuously maintained software libraries and 250+ frameworks that have trained essentially every serious ML engineer on the planet. People who want to train a model fast write CUDA. AMD's ROCm and Intel's oneAPI are technically capable, but the installed base of code, tutorials, and Stack Overflow answers, plus the fact that every new intern learns CUDA first, gives Nvidia a lock-in rarely seen outside software.

The second factor is vertical integration. Nvidia delivers not just a chip but a full system: GPU plus networking (Mellanox, 2019), software stack, DGX turn-key systems, and cloud access via partners. Anyone building a GPT-class training cluster is de facto buying an Nvidia solution. That is not a product sale; it is a platform sale, at gross margins above 70%.

Third: the founder-CEO mentality. Jensen Huang has been CEO since 1993, uninterrupted. He holds roughly 3.5% of the stock and stuck to a 30-year strategy, including the years when CUDA earned no money. Huang embodies the Hall-of-Fame pattern: an owner-CEO who waited through long stretches of apparent stagnation for an asymmetric payoff. The ChatGPT moment wasn't luck; it was sixteen years of calculated preparation.

Where things stand in 2026

As of May 2026, Nvidia sits in an almost unsustainably good position. The hyperscaler capex cycle (Microsoft, Google, Meta, Amazon, Oracle) has reached a $350B+/year pace, with most of it flowing to Nvidia. The Blackwell architecture (B100/B200) is ramping; every large model-training cluster on earth runs on these chips. Datacenter revenue is approaching a $150B annual run rate. The 35x P/E doesn't look extreme, because free cash flow has caught up.

Risks: first, concentration; more than 50% of revenue comes from five hyperscaler customers. If they slow capex or shift workloads to their own silicon (Google TPU, Amazon Trainium, Meta MTIA), margins erode. Second, geopolitics: China export restrictions wall off roughly 25% of the TAM. Third, the AI hype cycle itself: if monetization of AI applications lags expectations, hyperscalers may pull back capex. The Hall-of-Fame position is secured; the open question is whether Nvidia delivers another 36% CAGR through the 2030s or settles into mature-compounder mode.

Investor takeaways

Three lessons. First: software lock-in beats hardware lead. CUDA was Nvidia's real asset for 15 years before it showed up in the stock price. Second: founder-CEOs with long investment horizons pay off. Anyone who wrote off Huang in 2010 because "gaming GPUs are a cyclical business" missed a roughly 1,000x rise. Third: asymmetric payoffs follow long stretches of apparent stagnation. Nvidia traded at unremarkable multiples for long stretches before the AI boom, at a firm that already controlled the next platform. Patience here is not just a virtue; it is an investment strategy.
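The headline numbers are easy to sanity-check yourself. A minimal sketch in Python (the mid-2025 end date is an assumption for illustration) confirms that $1,000 compounding to roughly $3.5M over the ~26 years since the IPO implies a CAGR in the mid-30s, consistent with the 36% figure above:

```python
from datetime import date

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate over a (possibly fractional) number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Holding period: IPO (1999-01-22) to an assumed mid-2025 end date.
years = (date(2025, 6, 30) - date(1999, 1, 22)).days / 365.25

# $1,000 at the IPO growing to ~$3.5M.
rate = cagr(1_000, 3_500_000, years)
print(f"{years:.1f} years, CAGR = {rate:.1%}")
```

Small changes in the assumed end date or end value move the result by a point or two, which is why round figures like "36% CAGR" should be read as approximations.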

Sources

  1. Nvidia Investor Relations
  2. SEC EDGAR — Nvidia 10-K
  3. Yahoo Finance — NVDA historical
  4. Wikipedia — Nvidia
Disclaimer: This article is for historical and educational purposes only. It is not investment advice. Returns are approximations; past performance is not indicative of future returns. Trading and investing carry risk.