Kling AI, the leading Chinese video generation model, officially entered its 3.0 era today. The update is not a simple parameter iteration but a comprehensive upgrade to video and image generation, spanning everything from the underlying models to the application-level tools. The new version focuses on narrative ability, visual controllability, and multimodal collaboration, aiming to give professional creators and everyday users alike a smoother, more "native" creative experience.


In video generation, Kling 3.0 introduces a breakthrough "intelligent storyboarding" feature. AI videos were previously limited to single-shot expression, but the new version can automatically interpret a script's requirements and manage shot types and camera positions on its own, making AI-generated short films feel more cinematic. Image-to-video capability has also improved markedly: users can supply multiple images or video clips as primary references to precisely lock character appearance and scene details. In addition, video duration has been extended to 3-15 seconds, and combined with multilingual lip-sync technology, complex plot beats in longer shots now look more natural.

On the image side, the Image 3.0 series also brings surprises. With film-grade lighting reconstruction, the model interprets the visual elements in a prompt more accurately and outputs native 2K or even 4K resolution. For collaborative creation, the new version supports fusing up to 10 reference images, so tasks such as color consistency or style transfer can be completed in one click without switching between functions.

Some Kling AI 3.0 features are currently available in early access to Black Gold members. According to official information, more features will roll out over the coming months, further streamlining the full AI creation pipeline from concept to final product.

Key Points:

  • 🎬 Narrative Ability Rebuilt: New intelligent storyboarding and support for 15-second long takes let the AI manage camera positions and shot types like a director, lowering the barrier to professional-level video creation.

  • 🧬 Deeper Consistency: Supports multiple image/video references to precisely lock character and prop features, and delivers accurate multilingual lip-sync for more expressive characters.

  • 🖼️ Image Quality and Workflow Upgrade: The image model supports native 4K output and multi-image style fusion, and introduces a first-of-its-kind integrated multimodal creation workflow covering the entire chain from generation to editing.