
Nvidia has secured a significant multi-year agreement with Meta, a pivotal moment as the company pushes further into the CPU market amid growing competition. The deal, covering millions of chips, includes not only Nvidia’s Blackwell and Rubin GPUs but also, for the first time, standalone Grace and Vera CPUs. Analysts estimate the agreement is worth tens of billions of dollars, signaling a strategic shift for Nvidia as it begins selling CPUs separately to target the fast-growing inference market.
While GPUs are essential for training and inference of large AI models, CPUs offer a more cost-effective and energy-efficient solution for numerous smaller inference tasks. Meta’s decision to rely on Nvidia hardware contrasts with other hyperscalers like Amazon and Google, who develop their own processors. Reports suggest Meta's internal chip development has faced technical hurdles and delays.
Under the agreement, Meta will purchase millions of Nvidia chips, including current Blackwell GPUs, forthcoming Rubin GPUs, and standalone Grace and Vera CPUs. The financial terms remain undisclosed, but analysts such as Ben Bajarin of Creative Strategies estimate the deal's value in the tens of billions of dollars. The partnership aligns with Meta CEO Mark Zuckerberg's plans to nearly double the company’s AI infrastructure investment, potentially reaching $135 billion by 2026.
The noteworthy aspect of this deal is Meta’s decision to implement Nvidia's CPUs as standalone products at scale, departing from Nvidia’s previous strategy of bundling Grace processors with GPUs in "Superchips." In January 2026, Nvidia officially began offering CPUs independently, with neocloud provider CoreWeave being the first customer.
The AI industry is shifting from a GPU-heavy focus on training large models to an emphasis on inference, where trained models are actually run. For many inference workloads, GPUs are overkill. Bajarin described this transition from a "training" era to an "inference era" as one that rewards cheaper, more power-efficient hardware.
Ian Buck, Nvidia’s VP and General Manager of Hyperscale and HPC, noted that the Grace processor delivers "2x the performance per watt" for backend tasks such as database management. Meta has reportedly tested the Vera CPU with promising results. The Grace CPU pairs 72 Arm Neoverse V2 cores with LPDDR5x memory, which boosts bandwidth while keeping the package compact and power-efficient. The Vera CPU, featuring 88 custom Arm cores with multi-threading and confidential computing capabilities, is slated for deployment in 2027, where Meta plans to use it in WhatsApp's encrypted messaging service.
Nvidia’s decision to market CPUs individually puts the company in direct competition with server market giants like Intel and AMD. Meta’s choice to purchase standalone Nvidia CPUs sets it apart from other major companies such as Amazon with its Graviton processors and Google with Axion, even as Meta develops its own AI chips, albeit facing technical challenges.
Nvidia also faces intensifying competition as Google, Amazon, and Microsoft roll out new proprietary chips. OpenAI is developing custom silicon with Broadcom and has struck a separate supply deal with AMD, while startups like Cerebras offer specialized inference chips, challenging Nvidia’s market dominance. To strengthen its position, Nvidia recently acquired talent from inference chip company Groq.
Last year, Nvidia’s stock dipped slightly on reports that Meta was negotiating with Google to use its Tensor Processing Units, though no agreement has been confirmed. Despite these developments, Meta continues to run AMD Instinct GPUs and is involved in the design of AMD’s upcoming Helios rack systems.