Competition among domestic AI large models is heating up. After DeepSeek V4 sparked market debate, Moonshot AI's next-generation model, Kimi K3, has also reportedly made new progress. According to reports, Kimi K3 is expected to launch officially in the third quarter of this year, with a parameter count that may reach a striking 2.5 trillion.
In the AI field, parameter scale is often treated as a hard metric of model capability. For comparison, the recently released DeepSeek V4 Pro has 1.6 trillion parameters, while Baidu's Wenxin 5.0 has about 2.4 trillion. At 2.5 trillion, Kimi K3 would not only roughly double the parameter count of its predecessor, K2.X, but also surpass most mainstream domestic models and challenge the performance tier of top global AI models.
Beyond the leap in parameter scale, context-processing capability is another core strength of the Kimi series. The context length of Kimi K3 is reportedly being raised to around 1M (about one million words), far beyond the 256K supported by the current K2.6 version. Internal test figures are said to exceed even that, but given the heavy compute consumption and operating costs involved, the context length ultimately offered to ordinary users remains to be officially announced.
The domestic model market is currently running on two tracks: "cost-effectiveness" and "extreme performance." On one side, DeepSeek keeps pushing the limits of compute optimization and accessibility; on the other, models represented by Kimi continue to pursue long context and ultra-large parameter scales. The arrival of Kimi K3 will undoubtedly raise the bar for competition among domestic large models, offering users deeper logical reasoning and information-processing capabilities.
