Meta deepens Nvidia ties with pact for ‘millions’ of AI chips

Meta Platforms has agreed to deploy “millions” of Nvidia processors over the next few years, tightening an already close relationship between two of the biggest companies in the artificial intelligence (AI) industry.

Meta, which accounts for about 9 per cent of Nvidia’s revenue, is committing to use more AI processors and networking equipment from the supplier, according to a statement on Feb 17. For the first time, it also plans to rely on Nvidia’s Grace central processing units, or CPUs, at the heart of standalone computers.

The roll-out will include products based on Nvidia’s current Blackwell generation and the forthcoming Vera Rubin design of AI accelerators.

“We’re excited to expand our partnership with Nvidia to build leading-edge clusters using its Vera Rubin platform to deliver personal superintelligence to everyone in the world,” Meta CEO Mark Zuckerberg said in the statement.

The pact reaffirms Meta’s loyalty to Nvidia at a time when the AI landscape is shifting. Nvidia’s systems are still considered the gold standard for AI infrastructure – and generate hundreds of billions of dollars in revenue for the chipmaker. But rivals are now offering alternatives, and Meta is working on building its own in-house components.

Shares of Nvidia and Meta both rose about 1 per cent in late trading on Feb 17 after the deal was announced. Advanced Micro Devices, Nvidia’s rival in AI processors, fell around 3 per cent. 

Nvidia’s AI accelerators, the chips that help develop and run AI models, fetch an average of about US$16,060 (S$20,300) apiece, according to a recent IDC estimate. This means a million of the chips would cost more than US$16 billion – and that does not account for the higher price of newer versions or the other Nvidia equipment that Meta is buying.

But Meta was already the second-largest buyer of Nvidia products. It accounted for a total of about US$19 billion in the last fiscal year, according to data compiled by Bloomberg.

Dr Ian Buck, Nvidia’s vice-president of accelerated computing, said the two companies are not putting a dollar figure on the latest commitment or laying out a timeline. 

Dr Buck argues that only Nvidia can offer the breadth of components, systems and software that a company wishing to be a leader in AI needs. Still, it is reasonable for Meta and others to test alternatives, he said.

Mr Zuckerberg, meanwhile, has made AI the top priority at Meta, pledging to spend hundreds of billions of dollars to build the infrastructure needed to compete in this new era.

Meta has already projected record spending for 2026, with Mr Zuckerberg saying in 2025 that the company will put US$600 billion towards US infrastructure projects over the next three years. Meta is building several gigawatt-size data centres around the United States, including in Louisiana, Ohio and Indiana. One gigawatt is roughly the amount of power needed to supply 750,000 homes.

Dr Buck stressed that Meta will be the first large data centre operator to use Nvidia’s CPUs in standalone servers. Typically, Nvidia offers this technology in combination with its high-end AI accelerators – chips that owe their lineage to graphics processors. 

This shift represents an encroachment into territory dominated by Intel and AMD. It also provides an alternative to some of the in-house chips designed by large data centre operators, such as Amazon.com’s Amazon Web Services. 

Dr Buck said the uses for such chips are only growing. Meta, owner of Facebook and Instagram, will use the chips itself and also rely on Nvidia-based computing capacity offered by other companies. 

Nvidia CPUs will be increasingly used for tasks such as data manipulation and machine learning, he added.

“There are many different kinds of workloads for CPUs. What we’ve found is Grace is an excellent back-end data centre CPU,” said Dr Buck, meaning it handles the behind-the-scenes computing tasks.

“It can actually deliver two times the performance per watt on those back-end workloads,” he said. BLOOMBERG
