The competition for large language model (LLM) computing power is moving into ever more fundamental and specialized chip territory. On February 24, 2026, MatX, an AI chip startup founded by a senior engineer from Google's TPU team, announced that it had completed a $500 million Series B financing round (approximately 3.445 billion RMB).
This round of financing featured a stellar lineup, attracting strategic participation from semiconductor giants such as
Core Weapon: MatX One Chip
Innovative Architecture: The chip uses a "partitionable systolic array" structure, a design that combines the energy efficiency of a single large array with the scheduling flexibility of many small arrays, maximizing hardware utilization.
Memory Breakthrough: The MatX One pairs the very low latency of on-chip SRAM with the capacity and long-context processing capability of HBM (High Bandwidth Memory), easing traditional memory bottlenecks.
Full-Scenario Adaptation: Whether for prefill, high-frequency decoding, or reinforcement learning training, the MatX One delivers industry-leading performance.
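To make the "partitionable array" idea concrete, here is a toy software sketch: a monolithic matrix multiply stands in for one pass of a full-size array, and a partitioned version splits the work into row tiles that could each be scheduled on a separate sub-array while producing the identical result. This is purely illustrative under assumed semantics, not MatX's actual hardware design; all names and tile sizes are hypothetical.

```python
# Toy model of partitioning one large multiply array into smaller,
# independently schedulable tiles. Not MatX's real design; the names
# and sizes are hypothetical, for illustration only.

def matmul(a, b):
    """Plain matrix multiply, standing in for one full-array pass."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            out[i][j] = sum(a[i][p] * b[p][j] for p in range(k))
    return out

def partitioned_matmul(a, b, tile_rows=2):
    """Split A's rows into tiles; each tile could run on a separate
    sub-array, and the tile results concatenate to the same answer."""
    out = []
    for start in range(0, len(a), tile_rows):
        out.extend(matmul(a[start:start + tile_rows], b))
    return out

a = [[1, 2], [3, 4], [5, 6], [7, 8]]
b = [[1, 0], [0, 1]]
# Same numerical result, but the partitioned form exposes independent
# units of work that a scheduler could place flexibly.
assert partitioned_matmul(a, b) == matmul(a, b)
```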
Business Prospects: Lower LLM Usage Costs
In today's computing power market, reducing the cost per output token is a shared goal for all model providers.
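The cost-per-token goal reduces to simple arithmetic: divide an accelerator's hourly price by its sustained token throughput. The sketch below uses made-up numbers for illustration only; they are not MatX or vendor pricing.

```python
# Hypothetical back-of-the-envelope cost-per-token arithmetic.
# All figures are invented for illustration, not vendor pricing.

def cost_per_million_tokens(dollars_per_hour, tokens_per_second):
    """Convert an hourly accelerator price and sustained throughput
    into dollars per million output tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return dollars_per_hour / tokens_per_hour * 1_000_000

# Doubling throughput at the same hourly price halves the token cost,
# which is why chipmakers compete on tokens per second per dollar.
base = cost_per_million_tokens(2.0, 1000)
faster = cost_per_million_tokens(2.0, 2000)
assert faster == base / 2
```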
Industry Overview: The AI Chip Battle Is Intensifying
SambaNova released its fifth-generation RDU chip and reached a deep collaboration with Intel.
Positron announced the Asimov chip, claiming energy efficiency per watt up to five times that of NVIDIA's Rubin architecture.
Domestic Breakthrough: A research team in China recently developed a flexible AI chip costing less than $1 that can withstand 40,000 folds, pointing to new possibilities for wearable AI hardware.
