ChatGPT maker OpenAI to build its first in-house AI chip with Broadcom, TSMC
New York – OpenAI is working with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) to build its first in-house chip designed to support its artificial intelligence (AI) systems, while adding AMD chips alongside Nvidia chips to meet its surging infrastructure demands, sources told Reuters.
OpenAI, the fast-growing company behind ChatGPT, has examined a range of options to diversify chip supply and reduce costs. It considered building everything in-house and raising capital for an expensive plan to build a network of chip-manufacturing factories, known as foundries.
The company has dropped the ambitious foundry plans for now due to the cost and time needed to build a network, and plans instead to focus on in-house chip design efforts, according to sources, who requested anonymity as they were not authorised to discuss private matters.
The company’s strategy highlights how the Silicon Valley start-up is leveraging industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs, much as larger rivals Amazon, Meta, Google and Microsoft have done.
As one of the largest buyers of chips, OpenAI’s decision to source from a diverse array of chipmakers while developing its customised chip could have broader tech sector implications.
Broadcom stock jumped on Oct 29 following the report, closing up over 4.5 per cent, while AMD shares ended the day up 3.7 per cent.
OpenAI, which helped commercialise generative AI that produces human-like responses to queries, relies on substantial computing power to train and run its systems. As one of the largest purchasers of Nvidia’s graphics processing units (GPUs), OpenAI uses AI chips both for training, in which models learn from data, and for inference, in which a trained model makes predictions or decisions based on new information.
OpenAI has been working for months with Broadcom to build its first AI chip, focusing on inference, according to sources. Demand right now is greater for training chips, but analysts have predicted that demand for inference chips could surpass it as more AI applications are deployed.
Broadcom helps companies, including Alphabet unit Google, fine-tune chip designs for manufacturing and also supplies parts of the design that help move information on and off the chips quickly. This is important in AI systems, where tens of thousands of chips are strung together to work in tandem.
OpenAI is still determining whether to develop or acquire other elements for its chip design, and may engage additional partners, said two of the sources.
The company has assembled a chip team of about 20 people, led by top engineers who previously built Google’s tensor processing units, which help speed up machine learning workloads.
Sources said that through Broadcom, OpenAI has secured manufacturing capacity with TSMC to make its first custom-designed chip in 2026. They said the timeline could change.
Currently, Nvidia’s GPUs hold more than 80 per cent market share. But shortages and rising costs have led major customers like Microsoft, Meta and now OpenAI to explore in-house or external alternatives.
Training AI models and operating services like ChatGPT are expensive. OpenAI has projected a US$5 billion (S$6.6 billion) loss in 2024 on US$3.7 billion in revenue, according to sources. Compute costs – or expenses for the hardware, electricity and cloud services needed to process large data sets and develop models – are the company’s largest expense, prompting efforts to optimise utilisation and diversify suppliers.
OpenAI has been cautious about poaching talent from Nvidia because it wants to maintain a good rapport with the chipmaker it remains committed to working with, especially for accessing its new generation of chips named Blackwell, sources added. REUTERS

