Google, sleeping giant in global AI race, now ‘fully awake’

A report said Meta planned to use Google’s chips in its data centres in 2027.

PHOTO: REUTERS

LONDON – Since the launch of ChatGPT three years ago, analysts and technologists – even a Google engineer and the company’s former chief executive officer (CEO) – have declared Google is behind in the high-stakes race to develop artificial intelligence (AI).

Not anymore.

The internet giant has released new AI software and struck deals, such as a chip tie-up with Anthropic PBC, that have reassured investors the company will not easily lose to ChatGPT creator OpenAI and other rivals.

Google’s newest multipurpose model, Gemini 3, won immediate praise for its capabilities in reasoning and coding, as well as niche tasks that have tripped up AI chatbots.

Google’s cloud business, once an also-ran, is growing steadily, thanks in part to the global rush to develop AI services and demand for compute.

There are signs of rising demand for Google’s specialised AI chips, one of the few viable alternatives to Nvidia’s dominant gear.

A report on Nov 24 that Meta Platforms is in talks to use Google’s chips sent shares of Google’s parent Alphabet climbing. The stock has added nearly US$1 trillion (S$1.3 trillion) in market capitalisation since mid-October, helped by Mr Warren Buffett taking a US$4.9 billion stake during the third quarter and broader Wall Street enthusiasm for Alphabet’s AI efforts.

Alphabet shares rose 1.5 per cent to US$323.44 in New York on Nov 25, sending the company’s market capitalisation to nearly US$4 trillion.

SoftBank Group, one of OpenAI’s biggest backers, fell 10 per cent on the same day on worries about the competition from Google’s Gemini. Nvidia shares dropped 2.6 per cent, erasing US$115 billion in market value.

“Google has arguably always been the dark horse in this AI race,” said Mr Neil Shah, analyst and co-founder at Counterpoint Research. It’s “a sleeping giant that is now fully awake”.

For years, Google executives have argued that deep, costly research would help the company fend off rivals, defend its turf as the leading search engine and invent the computing platforms of tomorrow. Then ChatGPT came along, presenting the first real threat to Google search in years, even though Google pioneered the tech underpinning OpenAI’s chatbot. 

Still, Google has plenty of resources that OpenAI does not have: a corpus of ready data to train and refine AI models, flowing profits and its own computing infrastructure. 

“We’ve taken a full, deep, full-stack approach to AI,” Mr Sundar Pichai, CEO of Google and Alphabet, told investors last quarter. “And that really plays out.”

Any concerns that Google might be held back by regulators are dying away. It recently avoided the most severe outcome from a US anti-monopoly case – a break-up of its business – in part because of the perceived threat from AI newcomers. 

The search giant has shown some progress in a long-time effort to diversify beyond its core business. Waymo, Alphabet’s driverless car unit, is coming to several new cities and just added freeway driving to its taxi service, a feat made possible by the company’s enormous research and investment.

Some of Google’s edge comes from its economics. It is one of the few companies that produces what the industry calls the full stack in computing.

Google makes the AI apps people use, like its popular Nano Banana image generator, as well as the software models, the cloud computing architecture and the chips underneath.

The company also has a data gold mine for constructing AI models from its search index, Android phones and YouTube – data that Google often keeps for itself.

That means, in theory, Google has more control over the technical direction of AI products and does not necessarily have to pay suppliers, unlike OpenAI.

Several tech companies, including Microsoft and OpenAI, have plotted ways to develop their own semiconductors or forge ties that make them less reliant on Nvidia’s bestsellers. 

For years, Google was effectively its own sole customer for its home-grown processors, called tensor processing units, or TPUs, which the company first designed more than a decade ago to speed up the generation of search results and has since adapted to handle complex AI tasks.

That is changing. AI start-up Anthropic said in October that it would use as many as one million Google TPUs in a deal worth tens of billions of dollars.

On Nov 24, tech publication The Information reported that Meta planned to use Google’s chips in its data centres in 2027. Google declined to address the specific plans, but said its cloud business is seeing “accelerating demand” for both its custom TPUs and Nvidia’s graphics processing units.

“We are committed to supporting both, as we have for years,” a spokesperson wrote in a statement.

Analysts read the Meta news as a signal of Google’s success. “Many others have failed in their quest to build custom chips, but Google can clearly add another string to its bow here,” Mr Ben Barringer, head of technology research for Quilter Cheviot, wrote. BLOOMBERG