NVIDIA CFO Discusses Data Center Growth, AI Infra, and Product Roadmap at GS Communacopia + Tech Conf 2025
Key Takeaways
TL;DR: NVDA CFO Colette Kress reaffirmed conviction in outsized AI infra growth—global DC capex could hit $3T–$4T by 2030. Key drivers: annual product cadence (Blackwell, then Vera Rubin), sustained perf/Watt/TCO leadership, and expanding SW/system platforms. GMs expected to recover to the mid-70s by YE, with China H20 upside ($2–5B) possible pending geopolitics. Addressed ASIC comp, supply scaling, and the shift to “AI factories” for reasoning/agentic workloads—positioning NVDA as the critical enabler for the next industry leg.
1. AI Infra Spend & Mkt Outlook
- $3T–$4T DC Capex by 2030: Kress contextualized the figure as a “new computing platform for decades.” Driven by accelerated compute + sovereign AI globally, not just US.
  - “Our focus is long-term... $3T opp., global view, sovereign AI is key.”
- Capex Already Surging: “CSPs have doubled capex vs. two yrs ago.”
- Industry-Wide Shift: Growth from CSPs, but also “AI labs, AI factories”—over $100T of addressable industries.
2. DC Segment Drivers
- Strong Seq. Growth: Q2 DC rev. +12% QoQ (excl. China H20), Q3 guide +17% QoQ. Strength in compute + networking.
- Seamless Blackwell Transition: “Seamless transition… GB200 and GB300 Ultra both moving well.”
- Networking Outperformance: “InfiniBand nearly doubling seq.,” strong attach for Ethernet for AI and NVLink. Networking is a lead indicator for future compute: “Networking arrives before compute ships.”
- China/H20 Upside: Licenses approved for key customers; shipments pending geopolitical resolution. H20 could add $2–5B rev. near-term.
3. Product Roadmap & Tech Edge
- Annual Cadence—Moat Expands: “One-year cadence going well... customers keep innovating fast.”
- Blackwell & Vera Rubin:
  - Blackwell (GB200/GB300 Ultra) shipping at scale; NVLink (5th gen) enables rack-scale (72 GPUs).
  - Vera Rubin (6 chip variants, all taped out) positions for scale-out/scale-across; customers already planning multi-GW DCs pre-launch.
- NVLink Fusion & PCIe: NVLink is a moat; NVLink Fusion could expand ecosystem: “Interest in adding other chips... more to come.”
4. Comp. Position & Workloads
- ASICs vs. NVDA: Kress downplayed ASIC risk, citing platform efficiency—“most performant per watt/$.” Dual lead in training + inference as agentic/reasoning models rise.