Nvidia AI chip revenue could reach “at least” $1 trillion through 2027, CEO Jensen Huang said Monday at the company’s GTC 2026 conference in San Jose. The forecast covers sales of Nvidia’s current Blackwell chips and its next-generation Vera Rubin platform. Huang added that he expects compute demand to exceed that level.
Why the forecast matters for markets
Huang’s timeline places Nvidia’s newest chip cycle at the center of investor expectations for AI infrastructure spending. Nvidia supplies the processors that underpin large AI systems used across consumer and enterprise software. A $1 trillion cumulative revenue signal through 2027 frames Nvidia less as a single-product story and more as the toll collector for AI buildouts.
Huang referenced his comment from October 2025 that Nvidia had $500 billion in AI chip orders through 2026. Monday’s remarks extend that horizon and raise the implied scale of demand that buyers expect to deploy in data centers.
Inference becomes the new demand engine
Nvidia is pitching a shift in the AI economy from training models to operating them at scale. That operating phase is known as inference, in which models generate outputs and perform tasks continuously. Huang described this as the moment AI starts doing “productive work.” If inference volumes surge, customers need more chips, more networking, and more capacity for every deployed model.
Huang also pointed to internal adoption as a signal of the shift. He said Nvidia’s software engineers are using AI coding assistants, including Anthropic’s Claude Code and OpenAI’s Codex. The message is that AI tools are moving from demos to daily workflows, which can pull demand forward for faster and more complex hardware.
What investors will watch next
Markets will likely focus on three checkpoints: whether Blackwell deliveries stay on schedule, how quickly Vera Rubin ramps, and whether customers keep expanding budgets as inference workloads grow. If those three line up, Huang’s $1 trillion target becomes a baseline, not a ceiling.