According to the Wall Street Journal, a controversy erupted on January 14 over South Korea's government-funded "domestic large model competition": at least three of the five finalists have been accused of using open-source code from Chinese and American companies, including Zhipu AI, Alibaba, OpenAI, and DeepSeek, igniting a heated debate over whether "domestic AI" is truly independent.

The national project, launched in June 2024, aims to build a large model on purely Korean technology within three years, with performance reaching 95% of leading international models, in order to reduce reliance on Chinese and American tech giants and safeguard national economic and security interests. The winner will receive government-provided high-quality data, talent, funding, and priority access to key AI chips. Yet idealism has collided with reality: in an era when technological globalization and open-source collaboration are industry norms, a build-from-scratch approach to autonomy looks increasingly unrealistic.

The controversy has centered on finalist Upstage. Sionic AI CEO Ko Seok-hyeon publicly accused its model of containing sections that closely resemble Zhipu AI's open-source code, even retaining copyright notices, and questioned whether it was "a Chinese model in disguise applying for taxpayer funds." Upstage responded with a live-streamed press conference, presenting complete training logs to show that its core model was developed in-house and explaining that it had used widely adopted Zhipu open-source components only in the inference framework, not the training core. Ko Seok-hyeon later apologized, but the storm had already begun.

Naver and SK Telecom were subsequently drawn in. Naver was accused of using visual and audio encoders similar to Alibaba's Tongyi Qianwen and OpenAI products; SK Telecom's inference code was found to resemble DeepSeek's open-source library. Both companies stressed that their core training engines were entirely self-developed and that external components were used only for standardized input/output processing, a common industry practice.

Academic opinion on the matter is divided. Harvard University professor Wei Yu Yan stated, "Refusing open-source software means giving up technological dividends. Developing every line of code independently is neither realistic nor necessary." Lee Jae-mo, director of the AI Research Institute at Seoul National University, likewise confirmed that the core parameter training of the questioned models was indeed started from scratch, with no direct copying of foreign model weights.

Opponents, however, worry that even peripheral code could introduce backdoors or dependency risks, undercutting the strategic value of "sovereign AI." The Ministry of Science has yet to clarify whether the competition rules permit foreign open-source code, but Minister Bae Kyung-hoon said he welcomed the technical debate: "This is exactly the bright future of South Korea's AI."

Amid the global rush to build "AI sovereignty," South Korea's dilemma reflects a universal challenge: in a highly interconnected AI ecosystem, where does the boundary of true "technological independence" lie? Is it line-by-line self-development of code, or control over core algorithms and data sovereignty? The incident may not yield a simple answer, but it sounds a warning bell for countries worldwide: AI autonomy is far more than a model competition; it is a systemic endeavor spanning technology ethics, industrial policy, and global collaboration.