The "X − 1" problem
Per Dylan: the AI labs know they need X. Nvidia is building X − 1. The supply chain below Nvidia is building X − 2, or X ÷ 2. Each level is less AGI-pilled than the one above it, and the demand whip takes years to propagate down to ASML and the memory makers.
By the time the bottom of the chain wakes up, demand has already raced ahead by another order of magnitude.
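The lag mechanism above can be sketched as a toy model: each tier plans against the demand it observed some years ago, while real demand keeps compounding, so the shortfall grows with depth in the chain. All the numbers here (growth rate, lag, tier names) are illustrative assumptions, not Dylan's actual figures.

```python
# Toy "X - 1" model: each supply-chain tier plans for the demand it
# observed `LAG * depth` years ago, while real demand compounds yearly.
# GROWTH and LAG are hypothetical parameters chosen for illustration.

GROWTH = 2.0   # assumed yearly demand multiple
LAG = 2        # assumed years for the signal to move one tier down

tiers = ["AI labs", "Nvidia", "TSMC", "ASML / memory"]

def planned_vs_real(year: int) -> list[tuple[str, float, float]]:
    """For each tier, compare real demand with the stale demand it plans for."""
    real = GROWTH ** year
    rows = []
    for depth, name in enumerate(tiers):
        observed_year = max(0, year - depth * LAG)  # deeper tiers see older demand
        rows.append((name, real, GROWTH ** observed_year))
    return rows

for name, real, planned in planned_vs_real(year=6):
    print(f"{name:14s} real={real:4.0f}x  planning for={planned:4.0f}x  "
          f"shortfall={real / planned:.0f}x")
```

With these assumed numbers, by year six the bottom tier is planning against demand more than an order of magnitude below reality, which is the "wakes up too late" dynamic in one picture.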
Who's most AGI-pilled (April 2026)
OpenAI & Anthropic: maximally pilled. They'll sign anything to lock in compute. (Apr 24: Anthropic just locked in $40B / 5 GW from Google plus $5B / 5 GW from Amazon in a single week; OpenAI is now telling shareholders it targets 30 GW of dedicated compute by 2030.)
Nvidia: very pilled — Jensen invests $30B in OpenAI, $10B in Anthropic.
Google: just woke up after Gemini 3 ARR mooned to $5B in Q4. Now buying energy companies, putting deposits on turbines, and (Apr 24) committing 5 GW of TPU capacity to Anthropic.
TSMC, ASML, memory makers: still cautious. They dismiss Dylan's numbers as "way too high" and have been wrong every quarter — though SK Hynix's Q1 (Apr 23) finally concedes that HBM demand will outrun supply for years.