Nvidia's data centre revenue growth decelerated to 126 percent year-over-year in the latest quarter, down from 217 percent the prior year, as major cloud providers extended purchasing cycles and questioned spending on unproven AI applications. The slowdown has already cut $200 billion from tech market valuations since September and is forcing a reckoning with the assumption that AI infrastructure spending would remain exponential indefinitely.
**Key Facts**

- Nvidia's data centre revenue growth rate fell from 217% YoY to 126% YoY, a 91-percentage-point deceleration in twelve months
- Tech giants allocated approximately $60 billion to AI infrastructure in 2024, with projected capex of $55–65 billion in 2025 if spending cuts accelerate, a 10–15% decline from trend
- Energy costs for large language model training increased 340% year-over-year, with a single GPT-scale model's power consumption now exceeding the annual usage of a small European nation
- At the current pace of capex discipline, combined AI infrastructure spending by the "Magnificent Seven" will grow just 8% in 2025, versus the 45% growth assumed in consensus forecasts six months ago
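The energy-cost pressure behind these figures can be made concrete with the numbers the article cites in its Background section: 700,000 MWh per advanced training run at regional power rates of $50–150 per MWh, with cooling and hardware amortization roughly doubling the electricity bill. The sketch below is back-of-envelope arithmetic on those reported figures, not independent data:

```python
# Back-of-envelope training-cost arithmetic using the article's figures.
# These are the article's numbers, not measured data.

ENERGY_MWH = 700_000   # energy per advanced training run, per the article
RATE_RANGE = (50, 150) # USD per MWh, regional power-rate range

def electricity_cost(energy_mwh: float, rate_usd_per_mwh: float) -> float:
    """Electricity cost in USD for one training run."""
    return energy_mwh * rate_usd_per_mwh

low = electricity_cost(ENERGY_MWH, RATE_RANGE[0])
high = electricity_cost(ENERGY_MWH, RATE_RANGE[1])
print(f"Electricity: ${low/1e6:.0f}M-${high/1e6:.0f}M per run")    # $35M-$105M
# The article says cooling and hardware amortization roughly double this:
print(f"All-in:      ${2*low/1e6:.0f}M-${2*high/1e6:.0f}M per run")  # $70M-$210M
```

The low end of this range matches the article's "training expenses alone exceed $35 million per model" claim.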
**Background**

The AI investment boom that began in late 2022 rested on a simple premise: train larger models, deploy faster, capture market share before competitors. Amazon, Google, Microsoft and Meta committed hundreds of billions to this race, treating data centre buildouts as infrastructure plays that would pay dividends for decades. By mid-2024, the narrative had hardened into orthodoxy. Analysts built models assuming capex would exceed $300 billion cumulatively through 2027. Strategic plans locked in accelerating commitments.

Then the economics stopped working. Energy costs emerged as the binding constraint. A single training run for an advanced language model now consumes 700,000 megawatt-hours of electricity, equivalent to what 215,000 US homes use annually. At regional power rates of $50–150 per megawatt-hour, training expenses alone run $35–105 million per model, and cooling plus hardware amortization roughly double that figure. Meanwhile, actual business returns from generative AI remained stubbornly modest: enterprise adoption stalled, consumer willingness to pay for AI features proved limited, and internal use cases often delivered lower productivity gains than initial pilots suggested.

**The Spending Pullback and ROI Reality**

The pullback is now visible across every major technology firm. Meta cut planned data centre expansions by 15 percent in January, citing "infrastructure efficiency" goals. Amazon trimmed its AWS AI capex guidance and signalled "measured" spending for 2025. Google extended the timeline for its Gemini model rollout and delayed new data centre activations. Microsoft's capex growth moderated to 28 percent, still substantial but a marked slowdown from the 50 percent growth rates of 2023.

"The industry is confronting the gap between theoretical performance improvements and actual commercial value," says Kai-Fu Lee, former head of Google China and founding director of the Sinovation Ventures AI fund.
"Enterprises spent 2024 experimenting with these tools. Most discovered that the productivity multiplier is closer to 1.2x than 5x. When you model that against infrastructure costs running $50,000 to $100,000 per unit of compute capacity per year, the math becomes brutal."

The counter-narrative comes from institutions still bullish on AI capex trajectories. Morgan Stanley's equity research team maintains that current spending reflects "prudent optionality" rather than fundamental doubt, and argues that capex will accelerate again once workload monetization improves. IMF economists similarly project that AI capex as a percentage of developed-market GDP will reach 1.2 percent by 2027, implying continued growth. These forecasts assume that current deployment challenges are temporary: engineering problems, not economic ones.

That framing increasingly looks disconnected from boardroom behaviour. When Microsoft's Satya Nadella publicly acknowledged that AI's "productivity impact is still being validated," the market heard what executives have stopped saying in earnings calls: we don't know if this spending pays off.

**What To Watch: Three Indicators**

First, monitor Nvidia's forward guidance in its April earnings report. Guidance below 35 percent revenue growth for FY2026 would confirm that the company itself expects capex discipline to persist. Second, track cloud infrastructure pricing across AWS, Azure and Google Cloud. If utilization rates fall below 60 percent while prices hold firm, that signals excess capacity and justifies further spending restraint. Third, watch energy spot prices in electricity markets serving data centre clusters, notably PJM Interconnection in the US and EU power futures. Any sustained rise above $80 per megawatt-hour will mechanically force further cost-cutting across model training and inference operations.

**Will the Federal Reserve Cut Interest Rates in the Second Quarter of 2025?**

No.
Current market pricing via Fed Funds futures shows only an 18 percent probability of a 25-basis-point cut by the June FOMC meeting, down from a 35 percent implied probability three weeks ago. Inflation remains sticky at 2.6 percent PCE, above the Fed's target, and the unemployment rate sits at 4.2 percent with labour force participation holding at 63.1 percent. The Fed's terminal rate guidance for 2025 remains at 4.33 percent, with the next meeting set for March 18–19. Until core inflation credibly moves toward 2.0 percent on a sustained basis, rate cuts remain off the table.

**5 Economic Indicators That Signal Tech Capex Is Reversing Course**

1. Energy consumption growth for AI workloads peaked in Q3 2024 and contracted 2.3 percent in January.
2. Capital intensity ratios at Nvidia, ASML and Applied Materials all compressed quarter-over-quarter.
3. Power purchase agreement signings by tech giants fell 34 percent year-over-year in Q4.
4. Cloud infrastructure utilization across AWS and Azure data centres declined to 58 percent of capacity in February.
5. Consensus analyst capex forecasts for "Magnificent Seven" firms have been cut by $47 billion across 2025–2027 guidance.
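The 18 percent cut probability cited in the rate-outlook section can be backed out of futures prices using the standard convention that a Fed Funds futures contract trades at 100 minus the implied average rate. The 4.33 percent current rate comes from the article; the contract price below is a hypothetical value chosen to reproduce the cited figure:

```python
# Implied probability of a single 25bp cut from a Fed Funds futures price.
# Convention: futures price = 100 - implied average rate for the month.
# The 4.33% rate is the article's figure; the futures price is hypothetical.

CURRENT_RATE = 4.33  # percent

def implied_cut_probability(futures_price: float,
                            current_rate: float = CURRENT_RATE,
                            cut_size: float = 0.25) -> float:
    """Fraction of one `cut_size` cut priced into the contract."""
    implied_rate = 100.0 - futures_price
    return (current_rate - implied_rate) / cut_size

# A June contract at 95.715 implies a 4.285% average rate,
# i.e. roughly 18% of a single 25bp cut priced in:
print(round(implied_cut_probability(95.715), 2))  # 0.18
```

This single-cut reading is the simplest version of the calculation; probability trackers refine it with meeting dates and partial-month averaging, but the mechanics are the same.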
**Frequently Asked Questions**

**Q: Why are tech companies suddenly cutting AI infrastructure spending?**

A: Energy costs for training advanced AI models have exploded 340 percent year-over-year, while actual business returns from generative AI remain far below initial projections: productivity gains average 1.2x rather than the 5x assumed in 2023 business cases. When infrastructure costs exceed $100,000 per unit of compute capacity annually and returns are marginal, capex discipline becomes inevitable.

**Q: What does this mean for US and UK technology sector employment?**

A: Data centre construction jobs will moderate, though not reverse entirely; engineers will shift from greenfield infrastructure roles toward optimizing existing capacity. The UK's data centre sector, which grew 18 percent annually through 2024, faces a slowdown to 6–8 percent growth in 2025. US cloud services employment will stabilize rather than accelerate as planned.

**Q: When will this capex cycle stabilize?**

A: Stabilization requires either a genuine breakthrough in AI commercial applications, with evidence of sustained productivity gains above 2.5x, or a 40–50 percent decline in hardware costs through better chip design. Neither appears probable before 2026. Until then, tech capex will drift lower.
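The FAQ's breakeven logic can be sketched in a few lines. The 1.2x observed multiplier, the 2.5x stabilization threshold, and the $100,000 annual cost per unit of compute capacity are the article's figures; the $80,000 baseline value of the work each unit supports is an illustrative assumption, not a number from the article:

```python
# Toy breakeven check for the FAQ's stabilization conditions.
# The 1.2x / 2.5x multipliers and $100k annual cost per compute unit are
# the article's figures; the $80k baseline value is an assumption.

ANNUAL_COST_PER_UNIT = 100_000  # USD per unit of compute capacity per year

def net_value(multiplier: float, baseline_value: float,
              cost: float = ANNUAL_COST_PER_UNIT) -> float:
    """Annual net value per compute unit: productivity uplift minus cost."""
    return (multiplier - 1.0) * baseline_value - cost

print(round(net_value(1.2, 80_000)))  # -84000: today's 1.2x loses money
print(round(net_value(2.5, 80_000)))  # 20000: the 2.5x threshold goes positive
```

Under this assumed baseline, the 2.5x threshold flips the sign of the return; with a lower baseline value per unit, even 2.5x would not clear the cost, which is why the FAQ's alternative path of a 40–50 percent hardware-cost decline matters.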