
NVIDIA's AI Monopoly Dies in 2025: Here's Who Profits

NVIDIA's 90% AI chip margins attracted every tech giant with a fab. The H200 shortage created its own competition. Jensen's leather jacket can't save him now.

AI · NVIDIA · semiconductors · AMD · chips · hardware

The $2 Trillion Party Is Ending

TL;DR: NVIDIA controlled 92% of AI training chips with 78% gross margins. Then AMD, Google, Amazon, and even Intel crashed the party. The gold rush for chip sellers is over. The real money moves to chip users.

March 2024: NVIDIA hits $2 trillion market cap. Jensen Huang becomes the leather jacket-wearing prophet of the AI age. Every startup begging for H100 allocations like medieval peasants seeking grain.

July 2025: AMD's MI300X beats H200 on price/performance. Google's TPU v5e runs twice as fast for half the cost. Amazon's Trainium2 just stole Netflix's training workload. NVIDIA's stock is down 30% from its peak.

The monopoly isn't just cracking. It's shattering. And the shards are worth billions.

How NVIDIA Built a Money Printer

Let's understand the scam—sorry, "business model"—that made NVIDIA worth more than the entire S&P Energy sector:

Step 1: Build good GPUs for gaming (2000-2012)
Step 2: Discover they're perfect for AI when AlexNet trains on consumer GeForce cards (2012)
Step 3: Turn CUDA (shipping since 2007) into the lock-in every framework depends on (2012-2020)
Step 4: AI boom hits, become the only supplier (2021-2023)
Step 5: Charge whatever you want (2023-2025)

The H100 costs $3,000 to manufacture. NVIDIA sells it for $30,000. That's a 90% gross margin. For context, Apple's iPhone—the most profitable consumer product in history—has 38% margins.
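
The math is simple enough to check yourself. Here's the back-of-envelope in Python, using the figures above (the iPhone cost number is an illustrative assumption chosen to match the 38% figure):

```python
# Gross margin = (price - cost) / price, using the figures quoted above.
def gross_margin(price: float, cost: float) -> float:
    return (price - cost) / price

h100 = gross_margin(price=30_000, cost=3_000)  # street price vs build cost
iphone = gross_margin(price=1_000, cost=620)   # illustrative BOM, ~38%

print(f"H100 gross margin:   {h100:.0%}")      # 90%
print(f"iPhone gross margin: {iphone:.0%}")    # 38%
```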

But here's the beautiful part: They created their own shortage. NVIDIA could have tripled production. They didn't. Why sell 3x units at normal prices when you can sell 1x at luxury prices?

The Competition Woke Up and Chose Violence

Every tech giant watched NVIDIA print money and thought: "Wait, we know how to make chips too."

AMD: The Fast Follower

AMD's MI300X specs:

  • 192GB HBM3 memory (vs H100's 80GB)
  • 5.3 TB/s bandwidth (vs H100's 3.35 TB/s)
  • Price: $15,000 (vs H100's $30,000)

Microsoft bought 100,000 units. Meta ordered 50,000. OpenAI is testing them. The kicker? ROCm 6.0 finally doesn't suck. It's not CUDA, but it's good enough.
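
That 192GB number matters more than the price tag. A rough sizing sketch shows why (fp16 weights at 2 bytes per parameter, plus an assumed 20% overhead for KV cache and activations; a crude rule of thumb, not vendor guidance):

```python
import math

# Rule of thumb: fp16 weights = 2 bytes/param, plus ~20% overhead for
# KV cache and activations. Assumed numbers, not vendor guidance.
def gpus_needed(params_b: float, hbm_gb: int,
                bytes_per_param: int = 2, overhead: float = 1.2) -> int:
    weights_gb = params_b * bytes_per_param   # billions of params -> GB
    return math.ceil(weights_gb * overhead / hbm_gb)

for name, hbm in [("H100, 80GB", 80), ("MI300X, 192GB", 192)]:
    print(f"70B model on {name}: {gpus_needed(70, hbm)} GPU(s)")
# 70B in fp16 needs ~168GB: three H100s, or a single MI300X
```

Fitting the whole model on one card also skips the cross-GPU communication tax, which is half of AMD's pitch.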

Google: The Silent Assassin

Google's been building TPUs since 2015. Everyone laughed. Nobody's laughing now.

TPU v5e stats:

  • 2x faster than H100 for transformer models
  • 60% cheaper per FLOP
  • Integrated with every Google Cloud service
  • Already training Gemini 2

Google doesn't sell TPUs. They rent them. It's the drug dealer model: First hit's cheap, then you're locked into GCP forever.
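
Here's what renting one looks like from the code side: a minimal JAX sketch that runs unchanged whether the VM exposes TPU cores, GPUs, or just CPUs.

```python
import jax
import jax.numpy as jnp

# JAX enumerates whatever accelerator the runtime exposes: TPU cores on
# a Cloud TPU VM, GPUs elsewhere, CPU as the fallback. Same script everywhere.
print(jax.devices())

x = jnp.ones((1024, 1024))
y = (x @ x).block_until_ready()  # compiled by XLA for the local backend
print(y.shape)
```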

Amazon: The Infrastructure Play

AWS Trainium2:

  • 30% faster than H100 for training
  • 50% cheaper on EC2
  • Integrated with SageMaker
  • Anthropic is all-in

Amazon's play is different. They're not competing on raw performance. They're competing on "it's already in your AWS account."
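
In practice the pitch looks like this: a minimal sketch with the SageMaker Python SDK, where moving from GPUs to Trainium is mostly a change of instance type. The script name, role ARN, and S3 path are placeholders, and exact instance names vary by region and generation.

```python
from sagemaker.pytorch import PyTorch

# Same estimator API either way; only the instance type changes.
# All names below are illustrative placeholders.
estimator = PyTorch(
    entry_point="train.py",            # your training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.trn1.32xlarge",  # Trainium, vs ml.p4d.24xlarge for A100
    framework_version="2.1",
    py_version="py310",
)
estimator.fit("s3://my-bucket/training-data")  # placeholder S3 path
```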

Intel: The Zombie That Won't Die

Everyone wrote off Intel. Then Gaudi 3 benchmarks leaked:

  • Matches H100 on most workloads
  • $8,000 price point
  • x86 integration advantage
  • Power efficiency lead

Intel might be late, but they're not wrong. The industry wants a third option.

The CUDA Lock-in Is Breaking

NVIDIA's real moat wasn't hardware—it was CUDA. Every AI framework, every model, every tutorial assumed CUDA. It was the Windows of AI.

But monopolies create their own disruption:

OpenAI Triton: Python-like language that compiles to any hardware

PyTorch 2.0: Native support for non-NVIDIA backends

JAX: Google's framework that treats all accelerators equally

ONNX Runtime: Microsoft's "write once, run anywhere" for AI

Apache TVM: Optimizing compiler for everything

The translation layers are getting good. Really good. Running PyTorch on AMD MI300X is now 89% as efficient as native CUDA. That 11% gap isn't worth 2x the price.
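
A minimal sketch of what that portability looks like in practice: on ROCm builds of PyTorch, AMD GPUs surface through the same torch.cuda API, so one device-selection helper covers NVIDIA, AMD, and Apple silicon without touching the model code.

```python
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():          # NVIDIA CUDA, or AMD via ROCm builds
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple silicon
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
print(device, model(x).shape)              # identical code path everywhere
```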

The Chinese Wild Card

While everyone watches the US players, China is building an alternative universe:

Huawei Ascend 910B: Matches A100 performance despite sanctions

Baidu Kunlun 3: 30% faster than H100 on Chinese models

Alibaba's Hanguang 800: Custom RISC-V architecture

They can't buy NVIDIA chips due to export controls. So they built their own. And they're selling them to everyone the US won't sell to—which is half the world.

The Inference Revolution Nobody Sees Coming

Everyone's obsessed with training. But inference—actually running the models—is where the real volume is.

Training GPT-4: one time, $100 million.
Running GPT-4: millions of times daily, forever.
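
A toy comparison makes the point. Every number below is an illustrative assumption, not a measured figure:

```python
# Toy model: one-time training bill vs cumulative serving spend.
# All numbers are illustrative assumptions, not measured figures.
TRAIN_COST = 100e6        # the one-time training figure quoted above
COST_PER_QUERY = 0.002    # assumed serving cost per request, in dollars
QUERIES_PER_DAY = 500e6   # assumed request volume at consumer scale

daily_inference = COST_PER_QUERY * QUERIES_PER_DAY     # $1M/day
print(f"Inference spend passes training cost in "
      f"{TRAIN_COST / daily_inference:.0f} days")      # ~100 days
```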

And inference doesn't need H100s. It needs:

  • Low latency
  • Power efficiency
  • Edge deployment
  • Cost optimization

Guess who's winning inference?

  • Qualcomm's Cloud AI 100: 5x power efficiency
  • Apple's M4: Runs Llama 70B locally
  • Tesla's Dojo: Custom silicon for specific workloads
  • Groq's LPU: 10x faster inference than GPU

NVIDIA doesn't even compete in most of these markets.

The Bloodbath Timeline

Q3 2025: AMD takes 20% market share in new deployments

Q4 2025: Google announces TPU v6, 3x performance jump

Q1 2026: Apple enters the server chip market with a datacenter-grade M4 Ultra

Q2 2026: NVIDIA margins compress to 60% (still insane, but not monopoly insane)

Q3 2026: China achieves chip independence, floods Asian markets

Q4 2026: Commodity AI chips hit $1,000 price point

Who's Getting Rich from NVIDIA's Fall

Winners:

Cloud Providers: AWS, Google Cloud, Azure lock in customers with proprietary chips

AMD: From 8% to 30% market share in 18 months

TSMC: Everyone needs them to manufacture, pricing power increases

AI Startups: 70% reduction in compute costs changes unit economics

Open Source: Chip diversity forces framework portability

Losers:

NVIDIA: Still profitable, but not "worth more than Europe" profitable

Late-stage AI VCs: Portfolio companies' moats evaporate with cheap compute

CUDA-dependent companies: Technical debt becomes existential crisis

Jensen's Jacket Supplier: Significantly less leather required

The Real Story: Software Eats Hardware (Again)

Here's what everyone misses: Chips are becoming a commodity.

The same thing happened with:

  • CPUs (Intel monopoly → ARM everywhere)
  • Storage (EMC → cloud storage)
  • Networking (Cisco → white box)
  • Servers (Sun → Linux)

The pattern is always:

  1. Hardware company achieves monopoly
  2. Charges monopoly rents
  3. Software abstracts away the hardware
  4. Commodity providers enter
  5. Margins collapse
  6. Value moves up the stack

We're at step 4. NVIDIA is about to learn what Intel learned: In tech, monopolies are temporary.

The Bottom Line

NVIDIA built the perfect monopoly at the perfect time. They deserve credit for seeing the AI wave before everyone else. But they got greedy. $30,000 for a chip that costs $3,000 to make? That's not a business model—it's an invitation for competition.

The AI revolution needs picks and shovels. NVIDIA was the only store in town selling them at 10x markup. Now there's a Home Depot, a Lowe's, and some guy on Craigslist selling "slightly used" shovels that work just fine.

Jensen Huang was right: "The more you buy, the more you save." He just didn't mention you'd be saving by buying from his competitors.

In 12 months, we'll look back at paying $30,000 for an H100 the same way we look at paying $50,000 for a 1GB hard drive in 1990. Necessary at the time, insane in retrospect.

The hardware war is over. The software war is beginning. And CUDA, the only moat NVIDIA has left, is already springing leaks.

[Follow @honeydogs for more monopoly obituaries]