Nvidia is preparing for the AI-powered data center boom

Dive Brief:

Adoption of generative AI led to strong growth in Nvidia’s data center revenue, which hit a record $4.28 billion in the company’s fiscal first quarter of 2024, the period ended April 30. Data center revenue rose 18% sequentially and 14% year over year, the company said on a conference call Wednesday.

Generative AI will carry higher compute requirements than traditional enterprise workloads, indicating further demand, according to Colette Kress, the company’s EVP and CFO.

Kress envisions a 10-year transition in which the world’s data centers are reclaimed or recycled and built out as accelerated computing.

“There’s going to be a pretty dramatic shift in data center spending away from traditional computing to accelerated computing with smart NICs [network interface cards], smart switches, GPUs of course, and the workload will be mostly generative AI,” she said.

Dive Insight:

Data centers have traditionally relied on central processing units (CPUs) to perform general-purpose calculations. But generative AI models require far more computing power, which graphics processing units (GPUs) and tensor processing units (TPUs) are designed to deliver.
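
A rough sense of why accelerators matter here: the transformer models behind generative AI spend most of their time in large matrix multiplications, which GPUs parallelize far better than CPUs. A minimal sketch using PyTorch, as an illustration rather than anything from the article:

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for setup to finish before timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels launch asynchronously
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.4f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s")  # typically far faster at this size
```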

“If you need big processing power or advanced features, chances are a CPU-only approach won’t work for you,” said Chirag Dekate, VP Analyst at Gartner.

CIOs and technology leaders at companies running on cloud infrastructure don’t necessarily have to think about chip-level technology, because that responsibility rests with cloud providers, Dekate said. But over time, that could change.
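
As an illustration of that division of responsibility: on a public cloud, the chip-level decision often reduces to an instance-type string, and the provider owns the underlying hardware. A hypothetical sketch using boto3, assuming AWS credentials and a region are already configured; the AMI ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Requesting accelerated compute is just picking an instance type;
# the GPUs, drivers and data center hardware are the provider's concern.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID (assumption)
    InstanceType="p4d.24xlarge",      # an NVIDIA A100-backed EC2 instance type
    MinCount=1,
    MaxCount=1,
)
```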

Enterprise IT leaders could turn to cloud providers for early pilot initiatives, according to Mark Tauschek, VP, distinguished analyst and research fellow at Info-Tech Research Group. If those pilots prove valuable, some large companies in certain industries will likely move to accelerated on-premises data centers.


The big three cloud providers have moved to stay ahead of this shift by building data centers, redistributing workloads to accommodate AI compute needs, partnering with Nvidia on hardware or working on proprietary chips.

“We have competition from every direction,” Nvidia CEO and president Jensen Huang said during the earnings call, according to a Seeking Alpha transcript. The company faces competition from existing semiconductor companies, well-funded startups and cloud providers with internal chip projects, Huang said.

However, according to Dekate, the likelihood of data centers morphing into exclusively GPU- and TPU-dense environments is slim.

“Trying to run GPUs for every available workload is like killing a mosquito with a bazooka,” Dekate said. “It doesn’t make any sense at all.”
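
Dekate’s point is easy to demonstrate: for small workloads, the overhead of moving data to the GPU can exceed the compute itself. A minimal PyTorch sketch, again an illustration rather than anything from the article:

```python
import time
import torch

x = torch.randn(64, 64)  # a small, "mosquito-sized" workload

start = time.perf_counter()
_ = x @ x  # the CPU finishes this almost instantly
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    start = time.perf_counter()
    y = x.to("cuda")  # the host-to-device copy alone can dominate the cost
    _ = y @ y
    torch.cuda.synchronize()  # wait for the async GPU kernel to finish
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time * 1e6:.0f} us  GPU incl. transfer: {gpu_time * 1e6:.0f} us")
```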