On November 18, Ant Group officially launched "Lingguang," a multi-modal general AI assistant that can turn a natural-language request into a mobile mini-app in as little as 30 seconds, with the resulting apps editable, interactive, and shareable. Lingguang is also billed as the industry's first AI assistant to generate fully code-based multi-modal content. It launches with three features: "Lingguang Dialogue," "Lingguang Flash App," and "Lingguang Eye," supporting output across modalities including 3D, audio-video, charts, animations, and maps, making conversations more vivid and communication more efficient. Lingguang is now available on both the Android and Apple app stores.


(Caption: On November 18, the Lingguang App was launched on the app store)

"Lingguang Dialogue" breaks away from the traditional text-only Q&A model. Rather than piling up text, it designs each conversation like a curated exhibit: structured thinking keeps AI responses logical and concise; generated visual content such as dynamic 3D models, interactive maps, and audio-video makes the presentation more vivid; and well-organized information lets users grasp the answer quickly. This combination of logical rigor and information aesthetics reflects Lingguang's product philosophy: making complex things simple.

For example, in educational scenarios, when users ask knowledge-based questions to Lingguang, it can identify and extract key points, present them logically and hierarchically, and generate 3D animated images, interactive diagrams, etc., making complex information clear at a glance.


(Caption: The Lingguang Dialogue interface presents a minimalist style while providing diverse forms of information display)

These interactive answers, generated in seconds and at once minimalist and diverse, are backed by Lingguang's ability to produce multi-modal output entirely from code: every result presented, including charts, animations, and mini-app components, is generated by the model in real time from the conversation context. Underpinning this is an Agentic architecture of collaborating intelligent agents, which dynamically schedules specialized agents and tools for images, 3D, animation, and more, giving users a more complete, rich, and immersive visual experience.
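The article does not disclose Lingguang's internals, but the dynamic scheduling it describes can be sketched as a router that inspects a request and dispatches it to a specialized agent. This is an illustration only; every name here (`route_request`, `three_d_agent`, and so on) is hypothetical:

```python
# Hypothetical sketch of an agentic router. Lingguang's real
# architecture is not public; all names here are invented.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class AgentResult:
    modality: str   # e.g. "chart", "3d", "animation"
    payload: str    # generated code/content for that modality


def chart_agent(prompt: str) -> AgentResult:
    # In a real system this would call a code-generating model.
    return AgentResult("chart", f"<chart spec for: {prompt}>")


def three_d_agent(prompt: str) -> AgentResult:
    return AgentResult("3d", f"<3D scene for: {prompt}>")


def animation_agent(prompt: str) -> AgentResult:
    return AgentResult("animation", f"<animation for: {prompt}>")


# The router picks a specialist based on simple keyword cues;
# a production system would use a model-based classifier instead.
AGENTS: Dict[str, Callable[[str], AgentResult]] = {
    "chart": chart_agent,
    "3d": three_d_agent,
    "animation": animation_agent,
}


def route_request(prompt: str) -> AgentResult:
    lowered = prompt.lower()
    for keyword, agent in AGENTS.items():
        if keyword in lowered:
            return agent(prompt)
    return chart_agent(prompt)  # fall back to a default modality


print(route_request("show a 3D model of the solar system").modality)  # 3d
```

In practice, the interesting work happens inside each specialist agent; the router merely decides which modality best serves the conversation context.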

Notably, Lingguang introduces for the first time a "Flash App" feature aimed at ordinary users. When a user speaks or types a single sentence in a conversation, Lingguang can generate an AI application within one minute, and in as little as 30 seconds. Whether it is a fitness planner, a travel itinerary tool, or a healthy-recipe generator, each can be created from one sentence, with parameters users can customize before using and sharing the app immediately. By generating everyday mini-applications this quickly, the feature lets ordinary people enjoy the productivity gains of AI coding without any barrier to entry.

For example, if a user asks "How long should a soft-boiled egg be cooked?", Lingguang can generate a "Soft-Boiled Egg Time Calculator" in which users select egg size and desired doneness to get an answer tailored to their situation. Likewise, a user who wants the most cost-effective way to maintain a car can have Lingguang generate a "Car Maintenance Cost Calculator" that accepts mileage, fuel costs, and other inputs to produce a highly personalized maintenance plan.
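As an illustration only, the core of a mini-app like the egg calculator reduces to a small parameterized function. This is not Lingguang's actual generated code, and the timing values below are rough assumptions:

```python
# Illustrative sketch of an egg-timer mini-app's core logic.
# The base times and size adjustments are rough assumptions,
# not values taken from Lingguang's actual generated app.

BASE_MINUTES = {"soft": 6.0, "medium": 8.0, "hard": 10.0}   # desired doneness
SIZE_ADJUST = {"small": -0.5, "medium": 0.0, "large": 0.5}  # egg size


def cooking_time(size: str, doneness: str) -> float:
    """Return a suggested boiling time in minutes for a fridge-cold egg."""
    return BASE_MINUTES[doneness] + SIZE_ADJUST[size]


print(cooking_time("large", "soft"))  # 6.5
```

The point of the Flash App feature is that the user never writes this code: the model generates both the logic and an interactive front end around it from the original question.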


(Caption: Lingguang Dialogue can trigger Flash Apps, generating daily life mini-apps in as fast as 30 seconds)

It is worth noting that the flash apps Lingguang generates are not merely static front-end pages: they can directly call backend capabilities such as large models, so applications not only display results but also interact with external systems in real time, significantly widening the range of possible scenarios.
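Concretely, "calling backend capabilities" usually means the generated front end sends structured requests to a hosted model API. The endpoint and field names below are placeholders, not Ant Group's actual interface:

```python
# Hypothetical request builder for a flash app that queries a
# backend large model; the endpoint URL and JSON field names
# are invented for illustration.
import json

API_ENDPOINT = "https://example.invalid/v1/chat"  # placeholder URL


def build_model_request(user_input: str, app_context: str) -> str:
    """Serialize a flash-app query into a JSON request body."""
    body = {
        "messages": [
            {"role": "system", "content": app_context},
            {"role": "user", "content": user_input},
        ],
        "stream": False,
    }
    return json.dumps(body)


req = build_model_request("50,000 km, city driving", "car maintenance planner")
print(json.loads(req)["messages"][1]["content"])  # 50,000 km, city driving
```

Because the app carries its own backend access, a "Car Maintenance Cost Calculator" can recompute a plan with live model output rather than showing a fixed page.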

Rounding out the multi-modal assistant, the "Lingguang Eye" function uses AGI camera technology to observe and understand the physical world through real-time video streams, and supports creation modes such as text-to-image/video and image-to-image/video. In a tourism scenario, for example, a user can point the camera at a building of interest, and Lingguang can "see" it in real time and provide an explanation.

As a product-level exploration of Ant Group's AGI (artificial general intelligence) strategy, Lingguang captures the 2025 trend of the AI application market shifting toward scenario-based productivity tools. Its core concept, "making complex things simple," redefines the productivity boundaries of general AI assistants by embedding application development directly into everyday conversation.

According to reports, Ant Group has accelerated its AGI layout since 2025: it released the AI medical assistant AQ, invested in the embodied-intelligence company Lingbo Technology, and its Ant Bai Ling large model has joined the ranks of trillion-parameter models. The release of Lingguang further demonstrates Ant Group's full-chain capabilities in general artificial intelligence, from technological breakthroughs to scenario implementation.