Nvidia is expanding its empire
The AI boom’s biggest winner moves beyond chips.
Olaf of Disney's Frozen franchise appearing on stage with Mr Jensen Huang at Nvidia's annual developer conference. PHOTO: AFP
In the world of tech, few events are as keenly awaited as Mr Jensen Huang’s speech at Nvidia’s annual developer conference. And at this year’s jamboree in San Jose on March 16, his talk did not disappoint.
Over two hours, the boss of the world’s most valuable company unveiled new chips, artificial intelligence models and systems for everything from space-based data centres to self-driving cars. He went on to claim that this array of new products will help Nvidia sell over US$1 trillion (S$1.28 trillion) worth of AI-related hardware in the coming years.
Among engineers, the reaction was enthusiastic. Among investors, it was more guarded.
Doubts have grown about the durability of the AI boom. And Nvidia, the biggest beneficiary of the spending surge, has become a lightning rod for those concerns. On Feb 25, the firm reported record quarterly profits and forecast strong growth. Yet, its share price fell the next day. Since peaking in October 2025, it has dropped by about 10 per cent, even as an index of American chipmakers has risen by around 8 per cent.
Such bearishness marks a change in Nvidia’s fortunes. The company’s rise has been extraordinary, even by Silicon Valley’s standards.
Its graphics processing units (GPUs), the workhorse semiconductors used by AI models, account for over two-thirds of total processing power available on the world’s AI chips. In the year to January, the firm generated US$216 billion in revenue, eight times what it made three years earlier. Net income rose almost thirtyfold, to US$120 billion. It took nearly three decades for Nvidia to reach a market value of US$1 trillion; it vaulted to US$4 trillion barely two years later. Four months after that, it briefly surpassed US$5 trillion.
How high can Nvidia climb? Much higher, if Mr Huang is to be believed.
He has claimed that the hundreds of billions of dollars spent so far on AI infrastructure are just the start and that “trillions” more will follow. What is more, Nvidia has the resources to benefit. Its operating cash flows match those of the other tech giants. The firm holds more than US$62 billion in cash, a third of it generated in the past year.
To capture the opportunity, Mr Huang plans to turn Nvidia into a “foundational company” on which the AI economy rests. That means selling different types of chips and hardware, bundling products into complete AI systems, and embedding Nvidia’s technology more deeply into different industries. In short, Nvidia is becoming much more than an AI chipmaker.
Coming for the king
The transformation is needed partly because Nvidia’s success has attracted competitors. Some are conventional rivals, such as AMD, an American chipmaker that has released decent alternatives to Nvidia’s GPUs. Others are start-ups spying opportunities. New chip designs are becoming commercially viable because demand for inference (AI models answering queries) is growing, and inference places different demands on chips than training does. According to PitchBook, a data firm, young chip firms raised US$17 billion in 2025, more than in the previous two years combined.
But the most formidable challengers are Nvidia’s own customers. The hyperscalers – Alphabet, Amazon, Microsoft and Meta, all of which rely on vast numbers of data centres to run their businesses – buy huge quantities of its chips. In the latest financial year, just three of these hyperscalers accounted for more than half of Nvidia’s receivables, money owed but not yet paid.
Yet, these same firms are also designing their own processors. This can slash the cost of AI chips by more than half, while squeezing out better performance by tailoring hardware to the software that runs on it.
Souring geopolitics has encouraged rivals abroad. Since October 2022, America’s government has barred Nvidia from selling its most advanced chips to China. Sales there have slowed dramatically. Bernstein, a research firm, estimates that local suppliers such as Huawei, Cambricon and MetaX could grow from less than a fifth of China’s AI-chip market in 2023 to more than nine-tenths by 2027. Mr Jay Goldberg, an industry analyst at Seaport Research Partners, notes that the threat may extend beyond China. The new rivals may not produce chips as powerful as Nvidia’s, but in some markets, “good enough” could prove good enough.
Everything, everywhere all at once
Nvidia’s response is to expand in all directions. Mr Huang has compared the AI industry to a “five-layer cake”: energy, chips, networking infrastructure, models and applications. Nvidia intends to take bites out of three of the five layers.
Having conquered the market for GPUs, the firm plans to sell different types of chips. In December, Nvidia paid US$20 billion to license technology and hire engineers from Groq, a start-up specialising in inference chips. On March 16, the company unveiled a new chip using the start-up’s technology.
It is also pushing into central processing units (CPUs), a type of general-purpose chip. This is an area long dominated by Intel, a beleaguered giant. Nvidia already builds CPUs using designs from Arm, a British firm, which are used in its AI servers. Now it plans to sell them more broadly. In February, Nvidia struck a deal with Meta to supply CPU-only servers.
Nvidia is also investing in other layers. As AI systems scale, moving data between processors has become as important as the processors themselves. The firm is betting heavily on networking equipment, the technology that links chips together. In its most recent quarter, this business generated US$11 billion in revenue, making Nvidia one of the largest players in the field.
Model-making is the third layer. Nvidia has released several families of open-source AI models, each specialised for a particular industry: Alpamayo for self-driving cars, GR00T for robotics and BioNeMo for biomedical research. They often rank highly on open-source AI leaderboards. Nvidia plans to invest billions to expand its capabilities in this layer of the stack, with a focus on models tailored to specific applications.
One reason for owning the “full stack”, as Silicon Valley calls vertical integration, is that it makes it easier to coordinate the different layers. By tightly linking chips, data centre equipment and models, the company says it can extract better performance than by optimising each part separately. Mr Huang has compared building AI systems without integration to connecting “too many cats and dogs”.
It also means Nvidia can sell its hardware in bundles. Increasingly, the company describes its products not as chips but as components of “AI factories”, its term for specialised AI data centres. Some of these factories are being sold directly to governments under the banner of “sovereign AI”, the label for state-led efforts to build domestic AI infrastructure. Revenue from sovereign AI tripled in the last fiscal year to more than US$30 billion, about 15 per cent of Nvidia’s AI sales.
The company is also trying to rely less on the hyperscalers that dominate its customer list. One approach is to push deeper into industry. In car-making, Mercedes-Benz will soon ship vehicles equipped with Nvidia’s self-driving systems. In pharmaceuticals, Eli Lilly uses Nvidia’s infrastructure and models to accelerate drug discovery.
Mr Dion Harris, an Nvidia executive, says the aim is to work more closely with end customers, such as Lilly and Mercedes, to understand their needs and shape the next wave of AI computing.
But Nvidia is not the only one to say it is working closely with clients. Such moves put the firm on a collision course with the hyperscalers, which offer similar services.
Another approach is to create demand through its investments. Nvidia-backed firms, the idea goes, are more likely to buy its chips. Thus, the chipmaker is now one of Silicon Valley’s most prolific investors. Since 2020, it has made 250 investments, committing over US$65 billion. That includes such big bets as a US$30 billion investment in OpenAI, as well as small ones on start-ups in robotics, software and AI applications.
The firm’s investments also help to secure its supply chain. This March, Nvidia put more than US$4 billion into companies developing optical interconnects, which use light to transfer data rather than wires. Most AI data centres still rely on copper cables to link their equipment. Nvidia’s bet suggests it expects optical connections to become increasingly important. Mr Ben Bajarin of Creative Strategies, a consultancy, compares the strategy to Apple’s early moves to corner components for the iPod.
Nvidia is using its cash pile to strengthen other parts of its supply chain. The semiconductor industry is prone to shortages when demand surges. Supplies of advanced memory – critical for AI chips – are already sold out for this year and much of next. Nvidia bought most of the memory it will need this year, and part of next year’s supply, well in advance.
None of this ensures Nvidia’s continued dominance. Rivals may erode its margins. The industry’s shift from training models to running them may favour chips from other vendors. And if AI spending cools, sales could slow sharply. But for now, the champion of the AI age remains dominant – and seems intent on expanding its empire.

© 2026 THE ECONOMIST NEWSPAPER LIMITED. ALL RIGHTS RESERVED.


