China's large models have once again reshaped global perceptions. Baidu officially released and launched the latest version of its WENXIN (ERNIE) large model, ERNIE-5.0-0110, which scored 1460 in the latest text-capability ranking on the authoritative evaluation platform LMArena, placing eighth globally and becoming the only domestic large model in the list's top ten.

Even more notable are its breakthroughs in specialized fields. In mathematical reasoning, long considered a weakness of domestic models, ERNIE-5.0-0110 rose to second place globally, trailing only the unreleased GPT-5.2-High version. This means that Chinese AI has not only built a solid foundation in general language understanding but also demonstrates world-class competitiveness in high-level logical and symbolic reasoning tasks.


LMArena is widely regarded as a multi-dimensional arena for large models. Its ranking weighs a model's performance across question answering, creative writing, reasoning, coding, and other tasks, and carries high credibility. ERNIE-5.0-0110's entry into the top ten signals that domestic large models have moved from "usable" to "genuinely useful," approaching or, in key capabilities, even surpassing the international state of the art.

This breakthrough is no accident. Baidu has continuously invested in the underlying architecture and training methods of the WENXIN large model, iterating on knowledge enhancement, logical reasoning, and multimodal collaboration. The marked improvement in mathematical ability in particular reflects substantial optimization of its formal reasoning and problem decomposition mechanisms, core capabilities for building truly intelligent agents.
