Nvidia's CEO: Next-Gen AI Will Devour More Chips Than Ever

Jensen Huang just dropped a bombshell about artificial intelligence: it's getting really hungry. The Nvidia CEO says next-generation AI models need 100 times more computing power than their predecessors. Why? Because they're learning to think step by step, just like your high school math teacher always wanted.

Huang delivered this insight after Nvidia posted another quarter of eye-popping numbers. The chip giant's quarterly revenue jumped 78% year over year to $39.33 billion, with the data center business now accounting for more than 90% of the total. If AI were a restaurant, Nvidia would be both the chef and the landlord.

The company's latest star, the GB200 chip, generates AI content 60 times faster than the export-compliant versions Nvidia is allowed to sell to China. It's like comparing a sports car to a bicycle, except both vehicles cost more than your house.

Speaking of China, Nvidia's relationship with the world's second-largest economy has gotten complicated. Export controls from the Biden administration have roughly halved the share of Nvidia's revenue that comes from China. But Huang isn't too worried about the long term. He believes software developers will find ways around these restrictions, comparing it to water finding its way downhill.

The timing of Huang's comments about increased computing needs is particularly interesting. Chinese AI lab DeepSeek's apparent efficiency breakthrough raised fears that companies could train capable AI models with far fewer Nvidia chips, potentially disrupting Nvidia's dominance in AI hardware. That idea helped knock 17% off Nvidia's stock value on January 27, its worst drop since 2020.

But Huang turned this apparent threat into an opportunity. He praised DeepSeek for open-sourcing a "world class" reasoning model. The twist? These very reasoning models are what demand all that extra computing power. It's like DeepSeek invented a new type of fuel-hungry engine while trying to promote energy efficiency.

The push for more sophisticated AI reasoning isn't limited to DeepSeek. Huang pointed to OpenAI's GPT-4 and xAI's Grok 3 as examples of models that think through problems step by step. These AIs don't just pattern match anymore - they try to reason about the best way to answer questions, much like a human would pause to consider different approaches.
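
To make that concrete, here's a minimal sketch of what "step by step" means at the level of prompts and outputs. Nothing here calls a real model; the prompts, the hypothetical outputs, and the crude token proxy are all illustrative assumptions, not any vendor's API.

```python
# Illustrative sketch only: no real model is called here. The point is
# that a "reasoning" model emits its intermediate steps as extra output
# tokens, and output tokens are what cost compute at inference time.

direct_prompt = "What is 17 * 24?"
reasoning_prompt = (
    "What is 17 * 24? Think through the problem step by step "
    "before giving your final answer."
)

# Hypothetical outputs from each style of model:
direct_answer = "408"
reasoned_answer = (
    "17 * 24 = 17 * 20 + 17 * 4. "
    "17 * 20 = 340 and 17 * 4 = 68, so 340 + 68 = 408."
)

def rough_tokens(text: str) -> int:
    """Crude token proxy: whitespace-separated chunks."""
    return len(text.split())

print(rough_tokens(direct_answer))    # 1
print(rough_tokens(reasoned_answer))  # dozens -- the "thinking" isn't free
```

Multiply that per-question overhead across every query a chatbot serves, and the appetite Huang is describing starts to look plausible.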

This shift represents a fundamental change in how AI operates. Earlier models were like savants who could instantly recognize patterns but couldn't explain their thinking. New models are more like methodical problem solvers who show their work. The trade-off? They need vastly more computing power to support this deliberative process.
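
A back-of-envelope calculation shows where a "100 times" figure can come from. A standard rule of thumb puts transformer inference at roughly 2 FLOPs per model parameter per generated token, so cost scales linearly with output length. The model size and token counts below are assumptions chosen for illustration, not figures from Nvidia or DeepSeek.

```python
# Back-of-envelope: inference compute ~ 2 FLOPs per parameter per token,
# so a model that generates more tokens burns proportionally more compute.

PARAMS = 70e9  # assumed model size (70B parameters) -- illustrative only

def inference_flops(tokens_generated: int, params: float = PARAMS) -> float:
    """Approximate FLOPs to generate a given number of output tokens."""
    return 2 * params * tokens_generated

short_answer = inference_flops(50)      # terse, pattern-matched reply
worked_answer = inference_flops(5_000)  # long chain of reasoning steps

print(f"short answer:  {short_answer:.1e} FLOPs")   # ~7.0e12
print(f"worked answer: {worked_answer:.1e} FLOPs")  # ~7.0e14
print(f"ratio:         {worked_answer / short_answer:.0f}x")  # 100x
```

On that crude accounting, an answer a hundred times longer costs a hundred times the compute, before you even touch bigger models or more users. That's the direction Huang's claim points, even if his exact multiplier bundles in other factors.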

For the tech giants who make up Nvidia's customer base, this creates an interesting dilemma. They're already spending billions annually on AI infrastructure. If next-generation models really do need 100 times more computing power, they're facing some eye-watering budget meetings in their future.

Nvidia itself seems well-positioned to benefit from this trend. The company saw its revenue more than double year over year for five straight quarters through mid-2024, and even the recent "deceleration" still means 78% growth. Its dominance in AI chips means that when tech companies need more computing power, they usually end up at Nvidia's door.

The China situation adds another layer of complexity. While export restrictions have hurt Nvidia's Chinese revenue, Huang's confidence in software workarounds suggests he sees this as a speed bump rather than a roadblock. His comment that "software finds a way" carries a hint of Silicon Valley's characteristic optimism about technical solutions to political problems.

Why this matters:

  • The AI arms race is entering a new phase where raw computing power matters more than ever - and the cost of admission just went up by two orders of magnitude
  • While everyone's focused on what AI can do, the real story might be how much electricity and silicon it's going to consume doing it
