During the 2026 CES exhibition, NVIDIA CEO Jensen Huang offered an authoritative assessment of the 2025 open-source AI wave: while open-source large models have reached the technological frontier, they still lag roughly six months behind closed-source "top three" models such as Google Gemini, Anthropic Claude, and OpenAI GPT. This judgment neatly captures the core of the current AI competition: open-source and closed-source models are racing in parallel, with a gap that is manageable yet hard to close.

## 2025: A Year of Open-Source Highlights and Closed-Source Reversal

In early 2025, Chinese open-source efforts amazed the world: models such as DeepSeek R1 and Tongyi Qianwen (Qwen) performed exceptionally well in code, multilingual processing, and reasoning tasks, sparking optimism that "open source is the mainstream."

However, in the second half of the year, closed-source giants made a strong comeback:

- Google's Gemini 3 series continuously set new records on multimodal and reasoning leaderboards;

- Anthropic's Claude, with its outstanding code generation and engineering comprehension, became a preferred choice among developers;

- OpenAI's GPT-5, despite ongoing controversies, remained the leading model in API usage and commercial applications.

Although the open-source community remained active, it struggled to challenge the systemic advantages of closed-source models in data scale, compute investment, and engineering optimization.

## Jensen Huang's Six-Month Rule: A Gap, But Not an Insurmountable Chasm

Jensen Huang pointed out that the true value of open-source large models lies in democratizing AI:

- Download counts have exploded, allowing every country, company, and developer to participate in innovation;

- They can be deployed for free or at low cost, greatly lowering the barrier to AI adoption;

- Their technology is transparent, making them easy to audit and customize, which is especially valuable in high-compliance sectors such as government and finance.

However, he also conceded: "Top closed-source models are still about six months ahead." That window is the direct product of the giants' investments: thousands of H100/B100 chips, training runs spanning trillions of tokens, and costs in the hundreds of millions of dollars.

## "Six Months per Generation": AI Competition Enters an Ultra-Fast Iteration Cycle

More importantly, the pace of AI evolution has been compressed to one generation every six months:

- Closed-source companies release a stronger model every six months, consolidating their lead;

- Open-source communities catch up quickly through techniques such as distillation, fine-tuning, and mixture-of-experts (MoE) architectures;

- The result is that the gap remains around six months, neither widening nor narrowing.

For ordinary users and small and medium-sized enterprises, open-source models are sufficient for most scenarios: writing code, handling customer service, analyzing data, and generating content. Closed-source models, meanwhile, concentrate on high-precision, high-reliability, high-concurrency commercial core workloads.

## AIbase Observation: Open Source Is Not a Replacement, But a Symbiosis

Jensen Huang's assessment reveals a reality: open source and closed source are not a zero-sum game, but rather the "dual engines" of the AI ecosystem.

- Closed-source provides the technical ceiling and commercial benchmark;

- Open source ensures technology accessibility, innovation vitality, and supply-chain security.

Against the backdrop of rising global geopolitical tensions in particular, possessing high-performance open-source large models has become part of national strategic capability. The rise of Chinese models such as Qwen, DeepSeek, and MiniMax exemplifies this logic.