Tencent Cloud Vector Database Fully Opens Public Beta, Providing Efficient Access to Large Model Solutions


In collaboration with Tencent Cloud, MiniMax has successfully deployed an Agent reinforcement-learning sandbox capable of million-level throughput and tens of thousands of concurrent operations, and it is now running stably at full capacity in the test environment. This marks a significant breakthrough in the underlying infrastructure for AI agents and provides critical support for their large-scale application.
Alibaba's Tongyi Lab has recently been restructured, with the original Qwen team split up and talent departing. Following Lin Jinyang's exit, Yu Bowen, formerly head of Qwen's large-model pre-training team, has also joined ByteDance, where he now heads pre-training for the Seed team's visual-model and multimodal-interaction group. The moves reflect intensifying competition for large-model talent in China, and the industry landscape is undergoing a new round of reshaping.
The Tencent Cloud Intelligent Agent Development Platform will adjust its AI model billing starting March 13, 2026. The key changes are the end of free trials for public-beta models and optimized pricing for the self-developed Hunyuan series. The move signals that Tencent Cloud's commercial AI ecosystem is entering a mature stage. Notably, three high-performance models, GLM5, MiniMax2.5, and Kimi2.5, will exit their limited-time free public beta.
Baidu Cloud has launched DuClaw, a zero-deployment service that significantly lowers the barrier to AI applications. Users can run a high-performance AI assistant directly in the browser with no coding or configuration, replacing a deployment process that previously demanded technical expertise.
Recently, the "OpenClaw AI Agent Shrimp Capability Ranking" has drawn attention in the AI community. Focused on real-world scenarios, the ranking measures the coding-task success rate of mainstream large models under the OpenClaw framework on a unified task set, giving developers a point of reference. The evaluation combines automated code checking with LLM review to keep results objective and reproducible, with no human intervention.
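
For illustration only, a hybrid scoring loop of the kind described above might be structured as follows. This is a minimal sketch under stated assumptions: the `Task` shape, the `run_checks` and `llm_review` helpers, and the 0.7 weighting are hypothetical stand-ins, not details of the published ranking.

```python
# Hypothetical sketch: blend deterministic automated checks with an
# LLM review score. All names here are illustrative assumptions.
import subprocess
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str        # the coding task handed to the agent
    checks: list[str]  # shell commands that must exit with status 0

def run_checks(solution_dir: str, checks: list[str]) -> float:
    """Fraction of automated checks that pass (the deterministic signal)."""
    if not checks:
        return 0.0
    passed = sum(
        subprocess.run(cmd, shell=True, cwd=solution_dir).returncode == 0
        for cmd in checks
    )
    return passed / len(checks)

def llm_review(prompt: str, solution_dir: str) -> float:
    """Stub for an LLM judge returning a quality score in [0, 1].
    A real harness would call its judge model here."""
    return 0.0  # placeholder; no judge model is wired up in this sketch

def score_task(task: Task, solution_dir: str, w_auto: float = 0.7) -> float:
    """Weighted blend of the automated pass rate and the LLM review."""
    auto = run_checks(solution_dir, task.checks)
    review = llm_review(task.prompt, solution_dir)
    return w_auto * auto + (1 - w_auto) * review
```

Weighting the deterministic pass rate above the LLM score is one common way to keep such a benchmark reproducible while still capturing code quality that unit checks miss.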