According to China News Service, the "self-reliant AI foundation model" competition launched by the South Korean government in June last year has sparked a technical controversy. Of the five companies that reached the finals, three have been found to have used at least some code from foreign open-source AI models, with Chinese models serving as the main "reference."

This three-year competition, aimed at building a "national AI team" for South Korea, is led by the Ministry of Science and ICT. Five companies - Naver Cloud, Upstage, SK Telecom, NC AI, and LG AI Research - have advanced to the finals. The competition aims to select two domestic companies by 2027 whose AI models achieve at least 95% of the performance of industry leaders such as OpenAI or Google.


The controversy first centered on Upstage. Ko Suk-hyun, CEO of competitor Sionic AI, pointed out that some components of Upstage's AI model resembled Zhipu AI's open-source model and that the code still retained Zhipu AI's copyright notice. Upstage later held a live-streamed verification session, admitting that its inference code used open-source components from Zhipu AI but emphasizing that the model itself was independently developed and trained from scratch.

Subsequently, Naver was accused of using visual and audio encoders similar to those in Alibaba and OpenAI products, and SK Telecom's inference code was also said to resemble DeepSeek's model code. Both companies admitted using external encoders but emphasized that their core engines were entirely self-developed.

It is worth noting that the competition rules do not explicitly prohibit the use of foreign open-source code. Professor Gu-Yeon Wei of Harvard University stated that "abandoning open-source software means giving up huge benefits," adding that it is unrealistic to require all code to be written domestically. However, some in the South Korean industry worry that relying on foreign tools may introduce security risks and undermine the original goal of cultivating homegrown AI models.