Within the field of AI video generation, the wave of technological iteration is accelerating from "content generation" to "real-time interaction." Today, a leading company in the AI video generation industry officially announced the completion of its Series C financing. The round was led by Donyi and drew participation from well-known Chinese companies such as Ruyi and Sanqi Interactive Entertainment, as well as several renowned global institutions.


Key Breakthrough: World's First Real-Time World Model PixVerse R1

Alongside the funding announcement, the company also unveiled its groundbreaking technological achievement: a general-purpose real-time world model, PixVerse R1. This launch marks a qualitative shift in AI video technology from "one-way viewing" to "real-time response":

  • From Generation to Interaction: PixVerse R1 allows users to control and continuously extend videos in real-time during the generation process. Videos are no longer static clips but interactive digital worlds.

  • Leading in Inference Efficiency: While maintaining ultra-high image quality, the company has significantly improved inference efficiency through algorithmic innovation, securing a top-tier position in the rankings of the global evaluation institution Artificial Analysis.

The ultimate form of AI video will be a "digital interactive world." Built on PixVerse R1, the product is undergoing a structural transformation:

  • User-Generated Content (UGC) Activity Surges: PixVerse now exceeds 100 million users, and daily active users on the real-time generation platform quickly surpassed ten thousand within a short period, with users spontaneously building characters and world settings in the community.

  • Compute Consumption Reaches New Highs: Token consumption in real-time interaction scenarios is roughly a hundredfold that of traditional generation models. This high-frequency interaction points to deeper immersion and greater social value.

This round of funding will mainly be invested in the continuous iteration of the video foundation model, cutting-edge research on real-time world models, and the global launch of next-generation interactive entertainment products. Currently,