NVIDIA recently announced a multi-year, multi-generation strategic partnership with Meta. Under the agreement, Meta plans to deploy millions of NVIDIA Blackwell GPUs and next-generation Rubin-architecture GPUs, purpose-built for agentic AI inference, across its large-scale AI data centers to strengthen its AI computing infrastructure.

Notably, the collaboration is not limited to GPUs. Meta has also decided to adopt Arm-based Grace CPUs at scale, marking the first standalone deployment of the Grace processor series at this magnitude.
Engineering teams from NVIDIA and Meta have already begun joint optimization work aimed at full-stack acceleration of Meta's core production AI workloads. The effort will drive deep integration of NVIDIA's CPUs, GPUs, networking technologies, and software toolchain with Meta's large-scale production environment. According to industry sources, the collaboration is expected to be worth hundreds of billions of dollars, making it a milestone deal for the technology industry.
