The global AI arms race is reaching an unprecedented new height. On February 27, 2026, a sweeping new computing agreement with AWS was announced. With this huge commitment of money and compute secured, the deal breaks down as follows.
Key Deal: A $100 Billion "Computing Agreement"
A Significant Increase in Value: The previous 7-year, $38 billion contract has been formally extended and expanded into an 8-year, $100 billion computing deal.
Unprecedented Scale: The agreement covers roughly 2 GW (gigawatts) of Trainium computing capacity, a level of power and silicon sufficient to support the continuous evolution of next-generation "trillion-parameter" large models (a rough back-of-envelope estimate follows this list).
Hardware Iteration: The collaboration will make heavy use of AWS's self-developed Trainium3 chips and extend to the upcoming Trainium4.
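
To put 2 GW in perspective, here is a back-of-envelope estimate in Python. Only the 2 GW capacity figure comes from the announcement; the per-chip power draw, per-chip throughput, utilization, and training-token count below are illustrative assumptions, not Trainium specifications.

# Rough sizing of ~2 GW of accelerator capacity. Every constant except
# TOTAL_POWER_W (from the article) is an assumption chosen for illustration.
TOTAL_POWER_W = 2e9             # ~2 GW of capacity, per the announcement
WATTS_PER_ACCELERATOR = 1_500   # assumed all-in draw per chip (cooling, host, networking)
FLOPS_PER_CHIP = 1e15           # assumed ~1 PFLOP/s per chip at low precision
UTILIZATION = 0.4               # assumed sustained model-FLOPs utilization

chips = TOTAL_POWER_W / WATTS_PER_ACCELERATOR
cluster_flops = chips * FLOPS_PER_CHIP * UTILIZATION

# Standard rule of thumb: training compute ~ 6 * parameters * tokens.
params = 1e12                   # a "trillion-parameter" model
tokens = 2e13                   # assumed 20T training tokens
train_days = 6 * params * tokens / cluster_flops / 86_400

print(f"~{chips:,.0f} accelerators, ~{cluster_flops:.1e} sustained FLOP/s")
print(f"Illustrative 1T-parameter training run: ~{train_days:.1f} days")

In practice no single training run would monopolize the whole fleet, but the exercise shows why 2 GW of capacity is discussed in the same breath as trillion-parameter models.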
Technical Outlook: Trainium4 Aims for 2027
Billed as the "secret weapon" of this collaboration, the Trainium4 chip is expected to be delivered in 2027, and its performance details have now been disclosed for the first time:
Optimized Compute Architecture: Native support for enhanced FP4 arithmetic, aimed at significantly improving the energy efficiency of large-model training and inference (a conceptual sketch of FP4 quantization follows this list).
Top-Tier Hardware Specifications: Higher memory bandwidth and larger memory (VRAM) capacity, tailored to the real-time throughput demands of ultra-large-parameter models (see the bandwidth-bound serving estimate below).
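
The article only states that Trainium4 adds native FP4 support, without describing the scheme. The Python sketch below illustrates the general idea behind 4-bit floating point (the E2M1 format) with per-block scaling, a common community approach, to show where the memory and efficiency gains come from; it is not a description of AWS's implementation.

import numpy as np

# The 8 magnitudes representable in FP4 E2M1, plus their negatives.
_grid = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.unique(np.concatenate([-_grid, _grid]))

def quantize_fp4(weights: np.ndarray, block: int = 32) -> np.ndarray:
    """Round each block of weights to the nearest FP4 value under a shared scale."""
    w = weights.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 6.0   # map each block's max to 6.0
    scale = np.where(scale == 0.0, 1.0, scale)
    idx = np.abs((w / scale)[..., None] - FP4_GRID).argmin(axis=-1)
    return (FP4_GRID[idx] * scale).reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)
wq = quantize_fp4(w)
print(f"mean |error|: {np.abs(w - wq).mean():.4f}")
print("storage: 4 bits per weight (plus one scale per block) vs 16 or 32 bits")

Storing and moving a quarter of the bits per weight is the main reason FP4 can improve the energy cost of both training and inference, provided model accuracy is preserved.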

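As for why bandwidth and memory capacity dominate real-time serving of ultra-large models: at small batch sizes, generating each token requires streaming roughly the entire weight set from memory, so decode speed is approximately bandwidth divided by model size. The bandwidth figure and model size below are placeholders, not disclosed Trainium4 specifications.

# Roofline-style estimate of decode throughput for a memory-bandwidth-bound model.
# Both constants are illustrative assumptions.
HBM_BANDWIDTH_BPS = 5e12        # assumed 5 TB/s of memory bandwidth per accelerator
PARAMS = 1e12                   # a trillion-parameter model

for fmt, bytes_per_param in [("FP16", 2.0), ("FP8", 1.0), ("FP4", 0.5)]:
    model_bytes = PARAMS * bytes_per_param
    tokens_per_s = HBM_BANDWIDTH_BPS / model_bytes
    print(f"{fmt}: ~{model_bytes / 1e9:,.0f} GB of weights, "
          f"~{tokens_per_s:.1f} tokens/s per accelerator at batch size 1")

Even at FP4, a trillion-parameter model's weights run to hundreds of gigabytes, which is why larger per-accelerator memory capacity matters as much as raw bandwidth.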