MiniMax M2.1 is officially launched. Built for real-world coding and native agentic workflows, the model handles everything from vibe coding to serious production work. MiniMax M2.1 is a state-of-the-art (SOTA) open-source coding and agent model with 10 billion activated parameters. It scored 72.5% on SWE-bench Multilingual and an impressive 88.6% on the newly released VIBE-bench, surpassing several leading closed-source models such as Gemini 3 Pro and Claude 4.5 Sonnet.


The release of MiniMax M2.1 marks the arrival of the most powerful open-source model of the agent era. It performs excellently across multiple benchmarks, including SWE-Verified, SWE-Multilingual, Multi-SWE, VIBE-Bench, and Terminal-Bench 2.0. Among these, VIBE-Bench is the first comprehensive coding benchmark to cover everything from web development to Android, iOS, and backend workflows, giving developers a broader basis for evaluation.


During the launch event, the MiniMax team gave special thanks to early testing partners, developers, and ambassadors for their support and feedback. It was a big week for open-source models: GLM was released the day before, followed closely by MiniMax M2.1, and the two models posted nearly identical scores on SWE-Bench. This fully demonstrates the strength of open-source models, which remain fully open while matching or surpassing closed-source competitors.


MiniMax M2.1 also excels at multi-language programming, reaching SOTA levels across languages including Rust, Java, Go, C++, Kotlin, Obj-C, TypeScript, and JavaScript. In particular, it scored 72.5% on SWE-bench Multilingual, surpassing competing models.

Official introduction: https://www.minimax.io/news/minimax-m21