China's large models have reached a new high-water mark. MiniMax today officially open-sourced M2.1, its latest coding- and agent-oriented large model, built on a sparse architecture that activates roughly 10 billion parameters and delivering across-the-board gains in core scenarios such as multilingual programming, real-world code generation, and tool invocation. On the SWE-Multilingual and VIBE-Bench benchmarks, M2.1 not only leads other open-source models by a wide margin but also surpasses closed-source frontrunners such as Google's Gemini 3 Pro and Anthropic's Claude 4.5 Sonnet, signaling the start of an era in which open-source coding models can genuinely outperform closed-source ones.

Comprehensive lead in real programming scenarios, multilingual SOTA
M2.1 is designed for developers' daily coding needs and for native AI agents. Its core strengths include:
- Leading multilingual programming SOTA: it reaches state-of-the-art results among open-source models in mainstream languages such as Python, JavaScript, Java, Go, Rust, and C++, and is particularly strong in cross-language transfer and in understanding complex project contexts;
- Stronger performance on real engineering tasks: on SWE-Multilingual (a multilingual software-engineering benchmark), M2.1 achieves significantly higher code-repair accuracy and end-to-end task completion rates than Gemini 3 Pro and Claude 4.5 Sonnet;
- Optimized for agent collaboration: it excels at core agent capabilities such as tool invocation, API integration, and error diagnosis on VIBE-Bench (Visual-Agent & Interactive Behavior Evaluation), providing a solid foundation for building highly reliable AI developer agents (see the tool-calling sketch after this list).
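To make the agent scenario concrete, here is a minimal sketch of wiring a model like M2.1 into a tool-calling loop through an OpenAI-compatible chat-completions client. The base URL, API key, model identifier ("MiniMax-M2.1"), and the search_codebase tool schema are illustrative assumptions, not details from the official announcement.

```python
# Minimal tool-calling sketch. The endpoint, model name, and tool below are
# hypothetical placeholders; substitute the values documented by your provider.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # assumed OpenAI-compatible gateway
    api_key="YOUR_API_KEY",
)

# One illustrative tool: a code-search function the agent may decide to call.
tools = [{
    "type": "function",
    "function": {
        "name": "search_codebase",
        "description": "Search the repository for a symbol or string.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="MiniMax-M2.1",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": "Find where parse_config is defined and fix its error handling.",
    }],
    tools=tools,
)

# If the model chooses to invoke the tool, it returns a structured call instead
# of plain text; the host application executes it and feeds the result back in
# a follow-up message to continue the agent loop.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

In practice the host application runs the requested tool, appends the result as a tool message, and calls the model again until it produces a final answer; that loop is what benchmarks like VIBE-Bench exercise.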
Sparse activation architecture, high performance with low inference cost
M2.1 adopts a mixture-of-experts (MoE) sparse activation mechanism: out of a much larger total parameter count, only about 10 billion parameters are activated at inference time, sharply cutting compute cost while preserving performance. This lets developers run the model efficiently on consumer-grade GPUs or modest cloud instances, advancing the "democratization" of high-performance coding models.
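The sketch below illustrates the general idea behind sparse activation, not MiniMax's actual architecture: a toy top-k gated MoE layer in which each token is routed to only a few expert MLPs, so the parameters touched per token are a small fraction of the layer's total. All dimensions and expert counts are arbitrary examples.

```python
# Toy top-k mixture-of-experts layer: per token, only k of E expert MLPs run,
# so the activated parameter count is roughly k/E of the layer's total.
# Generic illustration only; not MiniMax's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router producing expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (tokens, d_model)
        scores = self.gate(x)                            # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique():              # dispatch tokens to chosen experts
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot:slot + 1] * self.experts[int(e)](x[mask])
        return out

layer = SparseMoE()
tokens = torch.randn(8, 512)
print(layer(tokens).shape)  # torch.Size([8, 512]); only 2 of 16 experts ran per token
```

Because only the routed experts' weights participate in each forward pass, memory bandwidth and FLOPs scale with the activated parameters rather than the full model size, which is why a model with ~10B activated parameters can be served far more cheaply than a dense model of the same total capacity.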
Open-source ecosystem accelerates, Chinese models catch up fast
Notably, just one day before M2.1's release, Zhipu AI open-sourced a new model in its GLM series that posts results comparable to M2.1 on the single-language SWE-Bench test, a back-to-back showing of the strength of Chinese open-source large models in specialized fields. The MiniMax team made a point of thanking early testing partners for their feedback, emphasizing that M2.1 is an engineered product "built for real developers," not merely a model tuned for benchmarks.
AIbase believes the release of M2.1 is not only a technological milestone but also a clear signal: in specialized vertical fields, open-source models can now challenge, and even surpass, the closed-source giants. Once developers no longer have to depend on hosted APIs and can freely deploy, fine-tune, and audit coding models, the real era of democratized AI programming begins, and the open-source push led by MiniMax is already reshaping the global developer toolchain.
Official documentation: https://www.minimax.io/news/minimax-m21
