OpenAI is moving forward with efforts to reduce its dependence on Nvidia (NVDA.O) by developing its first in-house artificial intelligence chip. The ChatGPT maker is expected to finalize the design in the coming months and plans to have it manufactured by Taiwan Semiconductor Manufacturing Co (2330.TW), a process that could cost tens of millions of dollars and take about six months to yield a working chip, sources told Reuters. This initial fabrication stage, known as “taping out,” does not guarantee a fully operational prototype: as with other advanced chips, the first version may require troubleshooting and multiple iterations, potentially delaying progress.
The first chip is expected to specialize in inference, the stage at which a trained AI model is applied to new data to make predictions or decisions in real time. Most of the computing power for these workloads is currently supplied by Nvidia’s GPUs, but shortages and rising prices have led many customers, including Microsoft and Meta, to seek alternatives.
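To illustrate the training/inference split described above, here is a minimal sketch using a toy linear model (an assumption for illustration only, not any specific OpenAI workload): inference is a single forward pass with fixed weights, while training adds a gradient update, which is why it demands far more compute.

```python
def predict(w, b, x):
    # Inference: a forward pass with already-trained, fixed weights.
    return w * x + b

def train_step(w, b, x, y, lr=0.1):
    # Training: a forward pass plus a gradient update to the weights.
    y_hat = w * x + b
    grad = 2 * (y_hat - y)  # derivative of squared error w.r.t. y_hat
    return w - lr * grad * x, b - lr * grad
```

An inference-only chip needs hardware for the first function but not the weight-update machinery required by the second.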
OpenAI initially considered designing a chip that could handle both training and inference, but the high design and production costs, combined with long development timelines, made that option impractical. The company has instead decided to focus on an inference chip while continuing to rely on Nvidia and AMD GPUs for training.
According to a source familiar with the project, the in-house chip will be built on TSMC’s 3-nanometer process and will pair a systolic array architecture with high-bandwidth memory, similar to the design of the graphics processing units (GPUs) sold by Nvidia. The chip is also expected to include an integrated security system that ensures only approved AI applications can access the hardware.
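The systolic-array idea mentioned above can be sketched in a few lines. In such a design, a grid of multiply-accumulate cells computes a matrix product as operands stream through the array in diagonal “wavefronts.” The simulation below is illustrative only, assuming square matrices and an output-stationary layout; OpenAI’s actual design is unpublished.

```python
def systolic_matmul(A, B):
    # Simulate an n x n grid of multiply-accumulate cells computing C = A @ B.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    # On cycle t, cell (i, j) sees operands A[i][k] and B[k][j] with
    # k = t - i - j, mirroring the diagonal wavefront of data in hardware.
    for t in range(3 * n - 2):  # cycles until the last wavefront drains
        for i in range(n):
            for j in range(n):
                k = t - i - j
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C
```

Because every cell does one multiply-accumulate per cycle with only local data movement, this layout suits the dense matrix math that dominates AI inference.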
It is not yet clear how much the chip will cost to develop. Still, some analysts predict that bespoke AI hardware could lower the overall cost of computing power in the long term, enabling wider access to advanced AI applications. Training and deploying large AI models requires massive computing resources, which has been one of the main drivers of OpenAI’s move into chip design.
The initiative is viewed internally as a strategic move designed to increase the company’s negotiation leverage with leading chip suppliers and to set an example for other tech firms seeking more control over their hardware components. If successful, this could trigger a new trend in the industry and reshape the future landscape of AI hardware.

Zachary Clexon is an insightful writer specializing in new technologies and fintech, with a keen eye for understanding emerging trends and their implications for the financial world. He has a degree in Information Technology from the University of California at Berkeley, where he honed his ability to communicate complex technological concepts to a broad audience.

