OpenAI is reportedly collaborating with Broadcom and TSMC to develop its first in-house chip designed to support its artificial intelligence (AI) systems. Additionally, it is incorporating AMD chips alongside Nvidia chips to meet growing infrastructure demands, Reuters reported, citing sources.
OpenAI Explores Custom Chip Development
OpenAI, the company behind ChatGPT, has reportedly explored various options to diversify its chip supply and reduce costs. These options included building everything in-house and raising capital for an expensive plan to construct a network of chip-manufacturing plants, or "foundries," according to the report.
However, the company has reportedly dropped the foundry plans due to the costs and time required to build such a network and will instead focus on in-house chip design efforts. As a major chip buyer, OpenAI's decision to source from multiple chipmakers while developing its own custom chip could have broader implications for the tech industry.
Broadcom’s Role in OpenAI’s AI Chip Design
According to the report, OpenAI has been working with Broadcom for months to develop its first AI chip focused on inference, the stage at which trained models are used to respond to user queries. Broadcom assists companies like Alphabet’s Google in fine-tuning chip designs for manufacturing and supplies components that facilitate rapid data transfer on and off the chips. This capability is crucial for AI systems, which require tens of thousands of interconnected chips to work seamlessly together, the report noted, citing sources.
OpenAI is still determining whether to develop or acquire other elements for its chip design and may bring in other partners, the report stated, citing two of the sources. OpenAI has assembled a chip team of about 20 engineers, led by former Google engineers who previously built Tensor Processing Units (TPUs), including Thomas Norrie and Richard Ho.
Sources indicated that OpenAI, through Broadcom, has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company (TSMC) to produce its first custom-designed chip by 2026, though the timeline may be subject to change.
AMD and Nvidia
The report also highlighted OpenAI's planned use of AMD chips through Microsoft's Azure, illustrating how AMD, with its new MI300X chips, is attempting to capture a share of a market dominated by Nvidia. AMD has projected USD 4.5 billion in AI chip sales for 2024, following the chip's launch in the fourth quarter of 2023.
Projected Losses and Compute Costs
Running AI models and services like ChatGPT is costly. According to the report, sources indicated that OpenAI projects a USD 5 billion loss this year against USD 3.7 billion in revenue. Compute costs, which cover the hardware, electricity, and cloud services needed to process vast datasets and train models, remain the company's largest expense, prompting initiatives to optimise resource use and diversify suppliers.