The global AI arms race has reached a new height. On February 27, 2026, OpenAI officially announced a massive new investment of 110 billion US dollars, which not only set the record for the largest single investment in the global tech industry but also reshaped the infrastructure landscape for AGI (Artificial General Intelligence).

With this huge sum secured, OpenAI announced deep collaborations with NVIDIA and Amazon (AWS) to build an unprecedented "computing empire."

Key Deal: A $100 Billion "Computing Agreement"

OpenAI's cooperation with Amazon has seen a leap in scale:

Significant Increase in Amount: The previous 7-year, $38 billion contract was formally extended and expanded into an 8-year, $100 billion computing deal.

Unprecedented Scale: The agreement involves about 2 gigawatts (GW) of Trainium computing capacity. This level of energy and chip integration is sufficient to support the continuous evolution of next-generation "trillion-parameter" large models.

Hardware Iteration: The collaboration will make heavy use of AWS's in-house Trainium3 chips and target the upcoming Trainium4.

Technical Outlook: Trainium4 Aims for 2027

Billed as the "secret weapon" of this collaboration, the Trainium4 chip is expected to be delivered in 2027, and its performance details have now been disclosed for the first time:

Optimized Computing Architecture: Native support for enhanced FP4 computation, aimed at significantly improving the energy efficiency of large-model inference and training.

Top Hardware Specifications: Features higher memory bandwidth and larger memory capacity, specifically tailored to the real-time throughput demands of ultra-large-parameter models.
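To make the FP4 point concrete: low-precision formats save energy and bandwidth by storing each weight in just 4 bits, at the cost of a very coarse value grid. The sketch below is purely illustrative (it is not AWS or Trainium code) and assumes the common "E2M1" FP4 layout (1 sign bit, 2 exponent bits, 1 mantissa bit) used in current low-precision inference schemes:

```python
# Illustrative only: round numbers to the nearest value representable
# in FP4 (E2M1), whose positive magnitudes are {0, 0.5, 1, 1.5, 2, 3, 4, 6}.
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_GRID = sorted({s * v for v in FP4_MAGNITUDES for s in (-1.0, 1.0)})

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 value (out-of-range
    inputs clamp to the largest magnitude, +/-6.0)."""
    return min(FP4_GRID, key=lambda v: abs(v - x))

weights = [0.37, -1.8, 2.6, 5.2, -7.0]
print([quantize_fp4(w) for w in weights])  # -> [0.5, -2.0, 3.0, 6.0, -6.0]
```

With only 16 representable values per number, real deployments pair FP4 with per-block scaling factors to keep accuracy acceptable; the energy win comes from moving and multiplying 4-bit values instead of 16- or 32-bit ones.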

Industry Insight: From "Renting Computing Power" to "Building an Ecosystem"

OpenAI