Meta’s 5-gigawatt Hyperion data center under construction in Richland Parish, Louisiana, Jan. 9, 2026.
Meta
Meta will use millions of Nvidia chips in its artificial intelligence data centers, including Nvidia’s new standalone CPUs and next-generation Vera Rubin systems, in a sweeping new deal announced Tuesday.
Meta CEO Mark Zuckerberg said in a statement that the expanded partnership continues his company’s push “to deliver personal superintelligence to everyone in the world,” a vision he announced in July.
Financial terms of the deal were not provided.
In January, Meta announced plans to spend up to $135 billion on AI in 2026. “The deal is certainly in the tens of billions of dollars,” said chip analyst Ben Bajarin of Creative Strategies. “We do expect a good portion of Meta’s capex to go toward this Nvidia build-out.”
The relationship itself is nothing new: Meta has been using Nvidia graphics processing units for at least a decade. But the deal marks a significantly broader technology partnership between the two Silicon Valley-based giants.
Standalone CPUs are the biggest new element of the deal: Meta will become the first company to deploy Nvidia's Grace central processing units on their own in its data centers, rather than paired with GPUs in a server. Nvidia said it's the first large-scale deployment of Grace CPUs as standalone chips.
“They’re really designed to run those inference workloads, run those agentic workloads, as a companion to a Grace Blackwell/Vera Rubin rack,” Bajarin said. “Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia’s putting across both sets of infrastructure: CPU and GPU.”
Meta plans to deploy Nvidia's next-generation Vera CPUs in 2027.
The multiyear deal is part of Meta’s overall commitment to spend $600 billion in the U.S. by 2028 on data centers and the infrastructure the facilities require.
Meta has plans for 30 data centers, 26 of which will be based in the U.S. Its two largest AI data centers are under construction now: the Prometheus 1-gigawatt site in New Albany, Ohio, and the 5-gigawatt Hyperion site in Richland Parish, Louisiana.
Also included in the deal are Nvidia's Spectrum-X Ethernet networking switches, which link GPUs together within large-scale AI data centers. Meta will also use Nvidia's security capabilities as part of AI features on WhatsApp.
The social media giant doesn’t solely rely on the top chipmaker. In November, Nvidia stock fell 4% on reports that Meta was considering using Google’s tensor processing units in its data centers in 2027.
Meta also develops in-house silicon and utilizes chips from Advanced Micro Devices, which won a notable deal with OpenAI in October as AI giants seek a second supplier alongside Nvidia amid constrained chip supply.
Nvidia’s current Blackwell GPUs have been on back-order for months, and the next-generation Rubin GPUs recently went into production. With the deal, Meta has secured a healthy supply of both.
Engineering teams from Nvidia and Meta will work together “in deep codesign to optimize and accelerate state-of-the-art AI models” for the social media giant, according to the announcement.
Meta has been developing a new frontier model dubbed Avocado as a successor to its Llama AI technology. The most recent version released last spring failed to excite developers, CNBC previously reported.
Meta’s stock has been on a roller coaster in recent months, and its AI strategy in particular has puzzled Wall Street.
The stock suffered its worst day in three years in October after the company announced its ambitious AI spending plans, then popped 10% in January after it issued stronger-than-expected sales guidance.
CNBC’s Jonathan Vanian and Kristina Partsinevelos contributed to this report.
