The AI Chip Supply Chain

One continuous chain from ASML's EUV machines to the energy that runs the data centers.

Sources:
  • Jensen Huang × Dwarkesh, Apr 15 2026
  • Dylan Patel (SemiAnalysis) × Dwarkesh, Mar 2026
  • Apr 29 2026 update: SK Hynix / Intel / TSMC / ASML Q1 earnings

Memory (#9) and ASML equipment (#12) have the most material new info from Dylan Patel; the Energy panel (#0) now shows where Jensen and Dylan disagree. Updated Apr 29: SK Hynix says HBM4 demand exceeds 3-year supply; TSMC defers high-NA EUV adoption to A10 (~2029); Google commits $40B + 5 GW to Anthropic; Samsung joins SK Hynix as a Vera Rubin HBM4 supplier.
Legend: Bottleneck · Constrained · Healthy · Software Foundation
How the bottleneck has shifted
2022–2023 (Resolved): CoWoS packaging. TSMC capacity for chip-on-wafer-on-substrate packaging was the binding constraint. Nvidia and TSMC "doubled, doubled, doubled" until it caught up.

2024–2025 (Resolved): Power & data centers. Permitting, transmission, gigawatt-scale cooling. Solved through behind-the-meter gas, neoclouds, and Texas regulatory speed.

2025–2027 (NOW, live): Memory (HBM). 30% of Big Tech CapEx is going to memory. Memory makers stopped building fabs in 2023, so there is no relief until 2027–28. Smartphone volumes are collapsing as a side effect. (Apr 23 update: SK Hynix says HBM4 demand exceeds supply for the next 3 years even with expanded fab investment; Yongin Phase 1 cleanroom pulled forward 3 months to Feb 2027.)

2026–2028 (Next, building): Cleanroom space + memory fabs. There is nowhere to put the new tools. Fabs take 2 years to build; meaningful new memory capacity arrives only in late 2027 / 2028.

2028–2030 (Coming, the big one): ASML / EUV throughput. Even at 100 EUV tools/year by 2030, the math caps you at ~200 GW/year of AI chips. This is the constraint Sam Altman's "gigawatt-a-week" plans run into. (Apr 22 update: TSMC defers high-NA EUV adoption to A10 / 2029, lengthening ASML's high-NA ramp; the base case is now low-NA EUV multi-patterning carrying the load through A14/A13.)
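The ~200 GW/year ceiling can be reproduced with a toy calculation. Every parameter below (scanner throughput, uptime, EUV layer count, AI wafer share, die yield, power per deployed accelerator, and the ~1,000-tool 2030 installed base) is an illustrative assumption, not a sourced figure; the point is only that output is linear in scanner count, so the cap moves slowly.

```python
# Back-of-envelope: how installed EUV scanners bound AI-chip deployment.
# All parameters are illustrative assumptions, not sourced figures.

HOURS_PER_YEAR = 8760

def annual_ai_gigawatts(
    installed_euv_tools: int,          # cumulative low-NA EUV scanners in service
    wafers_per_hour: float = 150,      # exposure throughput per pass (assumed)
    uptime: float = 0.65,              # availability after maintenance (assumed)
    euv_layers_per_wafer: int = 25,    # multi-patterning multiplies passes (assumed)
    ai_wafer_share: float = 0.15,      # leading-edge starts going to AI (assumed)
    good_dies_per_wafer: int = 20,     # reticle-sized dies after yield (assumed)
    kw_per_accelerator: float = 2.0,   # chip + system + cooling power (assumed)
) -> float:
    # Each scanner performs wafer-layer exposures; dividing by layers per
    # wafer converts exposure capacity into finished-wafer capacity.
    exposures = installed_euv_tools * wafers_per_hour * HOURS_PER_YEAR * uptime
    wafers = exposures / euv_layers_per_wafer
    ai_chips = wafers * ai_wafer_share * good_dies_per_wafer
    return ai_chips * kw_per_accelerator / 1e6  # kW -> GW

# Hypothetical 2030 installed base if ASML ships ~100 tools/year
print(f"{annual_ai_gigawatts(installed_euv_tools=1000):.0f} GW/year")  # → 205 GW/year
```

With these made-up inputs the model lands in the same ~200 GW/year range Dylan cites; doubling any single assumption only doubles the ceiling, which is why "gigawatt-a-week" roadmaps collide with scanner shipments.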
The AGI-pilledness gradient

The "X − 1" problem

Per Dylan: the AI labs know they need X. Nvidia is building X − 1. The supply chain below Nvidia is building X − 2 or X ÷ 2. Each level is less AGI-pilled than the one above, and the bullwhip takes years to propagate down to ASML and the memory makers.

By the time the bottom of the chain wakes up, demand has already raced ahead by another order of magnitude.
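The compounding can be sketched with a toy discount-and-lag model: each tier plans capacity to a discounted version of the tier above's plan, and only sees that plan a year late. The tier names, discount factor, and lag are invented for illustration.

```python
# Toy model of the "X - 1" whip: each supply-chain tier under-builds
# relative to the tier above and reacts with a delay.
# Discount factor, lag, and tier count are invented for illustration.

def capacity_plans(demand_by_year, discount=0.6, lag_years=1, tiers=4):
    """Return each tier's planned capacity per year.

    Tier 0 (the labs) plans to true demand; every deeper tier applies
    `discount` to the tier above it and sees that plan `lag_years` late.
    """
    plans = [list(demand_by_year)]
    for _ in range(1, tiers):
        above = plans[-1]
        tier = []
        for year in range(len(above)):
            seen = above[max(0, year - lag_years)]  # stale signal from above
            tier.append(seen * discount)
        plans.append(tier)
    return plans

demand = [1, 2, 4, 8, 16]  # demand doubling every year (arbitrary units)
labs, nvidia, fabs, equipment = capacity_plans(demand)
print([round(x, 3) for x in equipment])
# → [0.216, 0.216, 0.216, 0.216, 0.432], while true year-4 demand is 16
```

Even modest per-tier discounting plus a one-year lag leaves the bottom tier planning roughly 40x below year-4 demand, which is the order-of-magnitude gap the text describes.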

Who's most AGI-pilled (April 2026)

OpenAI & Anthropic: maximally pilled. Sign anything to lock compute. (Apr 24: Anthropic just locked in $40B / 5 GW from Google + $5B / 5 GW from Amazon in a single week; OpenAI now telling shareholders it targets 30 GW of dedicated compute by 2030.)

Nvidia: very pilled — Jensen invests $30B in OpenAI, $10B in Anthropic.

Google: just woke up after Gemini 3 ARR mooned to $5B in Q4. Now buying energy companies, putting deposits on turbines, and (Apr 24) committing 5 GW of TPU capacity to Anthropic.

TSMC, ASML, memory makers: still cautious. They dismiss Dylan's demand numbers as "way too high" and have been wrong every quarter, though SK Hynix's Q1 (Apr 23) finally concedes that HBM demand will outrun supply for years.

Fast timelines vs. slow timelines
Fast timelines → US wins
  • $1T+ in US AI infrastructure CapEx compounds before China can match
  • Anthropic + OpenAI ARR doubling in months — revenue funds the next round of compute
  • China can't distill from US frontier models if those models stop being shipped openly
  • By the time China indigenizes EUV (~2030), the US is already on a self-improving research loop
Slow timelines → China wins
  • Western supply chain is multi-country (Netherlands + Japan + Korea + Taiwan + US) — fragile
  • China's vertical integration + state-directed permitting outpaces democratic capex cycles
  • Mass smartphone collapse + memory crunch erodes US consumer goodwill toward AI
  • Huawei was first to a 7nm AI chip in 2020 — they have the talent if they get the tools

Jensen's "Five-Layer Cake" framing

The lens Jensen keeps returning to: every layer must succeed, and the U.S. shouldn't sacrifice the chip layer to protect any one model lab.

L5 · AI Applications: where the value gets captured. The most important layer.
L4 · AI Models: frontier labs. Where the "Mythos" debate plays out.
L3 · Systems & Networking: racks, NVLink, switches, optics, the co-design surface.
L2 · Chips: the layer Jensen refuses to concede in the China debate.
L1 · Energy: Jensen says this is the long-term constraint. Dylan disagrees; see the Energy layer.

Changelog

2026-04-29
  • Memory (#9): SK Hynix Q1 2026 — 72% op margin, ₩52.6T revenue, HBM4 demand exceeds 3-year supply, 57% HBM market share, Yongin Phase 1 cleanroom advanced from May→Feb 2027. Samsung now mass-producing HBM4 to Nvidia for Vera Rubin (Q1 prelim, Apr 8). Micron has sold out 2026 HBM supply. (Sources: SK Hynix Q1 2026 earnings call, Apr 23; Samsung Q1 prelim, Apr 8; Micron FQ1 2026.)
  • Equipment (#12): ASML Q1 2026 sales €8.8B, raised 2026 guide to €36–40B; planning ≥80 low-NA EUV systems in 2027. But TSMC told Bloomberg (Apr 22) it will defer high-NA EUV adoption to its A10 node in ~2029, sticking with low-NA multi-patterning through A14/A13. Stock dropped 3.3% on the news.
  • Fab (#10) + Packaging (#8): TSMC Q1 2026 revenue $35.9B, gross margin 66.2%; CoWoS capacity heading to ~127K wafers/mo by year-end with 240–270K wafers/yr outsourced to Amkor + ASE/SPIL. Intel Foundry Q1 +16% to $5.4B; Apple reportedly evaluating 18A-P, Google evaluating Intel advanced packaging for TPU v8e (Apr 29).
  • Models (#2) + Cloud (#3): Google to invest up to $40B in Anthropic ($10B now + $30B milestone) + 5 GW TPU capacity over 5 years at $350B valuation (Apr 24). Amazon adds $5B / 5 GW (Apr 20). Anthropic ARR reportedly ~$30B, surpassing OpenAI. OpenAI shareholder letter targets 30 GW of dedicated compute by 2030.
  • Chip designers (#7): NVIDIA Vera Rubin VR200 specs confirmed — 50 PFLOPS NVFP4, 3.3× B300 throughput, 336B transistors, 288 GB HBM4. H2 2026 datacenter deployment. Jensen raised AI infra TAM projection to $1T through 2027.