Meta, the parent company of Facebook, is testing its first in-house chip designed for training artificial intelligence systems—a significant step in its effort to develop custom silicon and reduce dependence on external suppliers like Nvidia (NVDA.O). According to two sources who spoke with Reuters, the social media giant has initiated a limited deployment of the chip and plans to scale up production for broader use if the trials prove successful.
Meta is working with Taiwanese chipmaker TSMC to manufacture the chip, which it calls MTIA, part of its Meta Training and Inference Accelerator series. According to the sources, the chip is a “dedicated accelerator” designed to handle only AI-specific tasks, making it more power-efficient than the general-purpose GPUs typically used for such workloads.
The chip, still in the early stages of development, could be deployed next year in Meta’s data centers, which process recommendations and other applications serving more than three billion users daily. The chips could eventually power generative AI models, helping Meta create content and apps that adapt to each user’s interests, culture, and personality. Training such models demands immense computing power, which is why the industry has moved beyond traditional central processing units (CPUs) to more capable GPUs.
Meta’s effort is part of a broader trend of companies bringing more of their infrastructure in-house to lower costs and accelerate innovation. Amazon Web Services (AWS) has also developed GPU alternatives with its Trainium and Inferentia chip lines, both designed to train and run AI models more efficiently, reducing operational costs.
Before joining Meta in 2020, Nicolaas helped build network processors, chips that convert signals into the electronic impulses that drive computers. The chip he built, which was based on the OpenCL programming language, used many tiny threads to carry information. These are far more energy-efficient than a standard CPU’s large cores, each of which handles one task at a time.
A successful launch of the MTIA chip could help Meta slash its dependence on GPUs, which currently cost it billions of dollars in annual capital expenditures, and would lower infrastructure costs as the company scales up its AI operations.
The MTIA chips will be critical to scaling Meta’s AI efforts, which will demand ever more powerful hardware to train the increasingly complex, data-intensive generative AI models driving much of today’s hype around the technology. In the long term, developing its own hardware and AI capabilities could help Meta control its infrastructure costs and strengthen privacy protections for its users. That would align with Zuckerberg’s vision for AI, which he has described as an essential way to connect people, build community, and make the world more meaningful. It would also give Meta greater control over how user data is processed and protected, which could prove especially important as the company expands into augmented reality and other technologies.