Nvidia CEO says new Rubin chips on track, helping to speed AI

Nvidia founder and CEO Jensen Huang introduces the new Rubin and Vera chips at the CES 2026 trade show on Jan 5.

PHOTO: AFP

Nvidia’s highly anticipated new Rubin data centre products are nearing release in 2026, and customers will soon be able to try out the technology, helping to speed artificial intelligence (AI) development.

All six of the new Rubin chips are back from manufacturing partners, and they have already passed some of the milestone tests that show they are on track for deployment by customers, Nvidia said. Chief executive Jensen Huang touted the products during a keynote presentation at the CES 2026 trade show in Las Vegas on Jan 5. 

“The race is on for AI,” he said. “Everybody’s trying to get to the next level.” Mr Huang’s remarks signal that Nvidia is maintaining its edge as the leading maker of AI accelerators, the chips used by data centre operators to develop and run AI models.

Rubin is Nvidia’s latest accelerator and is 3.5 times better at training and five times better at running AI software than its predecessor, Blackwell, the company said. A new central processing unit has 88 cores – the key data-crunching elements – and provides twice the performance of the component that it is replacing. 

The company is giving details of its new products earlier in the year than it typically does – part of a push to keep the industry hooked on its hardware, which has underpinned an explosion in AI use. Nvidia usually dives into product details at its spring GTC event in San Jose, California. 

For Mr Huang, CES is yet another stop on his marathon run of appearances at events, where he has announced products, tie-ups and investments all aimed at adding momentum to the deployment of AI systems.

Some on Wall Street have expressed concern that competition is mounting for Nvidia – and that AI spending cannot continue at its current pace. Data centre operators are also developing their own AI accelerators. But Nvidia has maintained bullish long-term forecasts that point to a total market in the trillions of dollars.

The new hardware, which also includes networking and connectivity components, will be part of its DGX SuperPod supercomputer while also being available as individual products for customers to use in a more modular way. The step-up in performance is needed because AI has shifted to more specialised networks of models that not only sift through massive amounts of inputs, but need to solve particular problems through multistage processes. 

The company emphasised that Rubin-based systems will be cheaper to run than Blackwell versions because they will return the same results using smaller numbers of components. Microsoft and other large providers of remote computing will be among the first to deploy the new hardware in the second half of the year, Nvidia said. 

For now, the majority of spending on Nvidia-based computers is coming from the capital expenditure budgets of a handful of customers, including Microsoft, Alphabet’s Google Cloud and Amazon.com’s AWS. Nvidia is pushing software and hardware aimed at broadening the adoption of AI across the economy, including robotics, healthcare and heavy industry.

As part of that effort, Nvidia announced a group of tools designed to accelerate development of autonomous vehicles and robots. BLOOMBERG
