On Christmas Day, edge-AI startup Liquid AI officially released its latest experimental model, LFM2-2.6B-Exp. This small open-source model, with only 2.6 billion (2.6B) parameters, performed outstandingly across multiple key benchmarks, excelling in particular at instruction following, where it surpassed DeepSeek R1-0528, a model with hundreds of billions of parameters. The release has sparked widespread discussion in the industry, and the model has been hailed as the "strongest 3B model."
Model Background: Experimental Breakthrough Driven by Pure Reinforcement Learning
LFM2-2.6B-Exp is based on the 2.6B foundation model of Liquid AI's second-generation Liquid Foundation Models (LFM2) series. It was optimized through pure reinforcement learning (RL), with no supervised fine-tuning warm-up and no distillation from a larger teacher model. The model inherits the advantages of LFM2's hybrid architecture, combining short-range gated convolutions with grouped query attention (GQA) and supporting a 32K context length. It is designed for edge devices such as smartphones, laptops, and IoT hardware, enabling efficient local deployment.
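To make the GQA half of that hybrid architecture concrete, here is a minimal toy sketch in NumPy: multiple query heads share a smaller set of key/value heads, which shrinks the KV cache that dominates memory during on-device decoding. The shapes and head counts below are illustrative, not LFM2's actual configuration.

```python
import numpy as np

def grouped_query_attention(q, k, v, num_kv_groups):
    """Toy grouped query attention: each query head attends using the
    key/value head of its group, so KV storage is num_kv_groups wide
    instead of num_q_heads wide. Shapes:
      q: (num_q_heads, seq, d), k/v: (num_kv_groups, seq, d)."""
    num_q_heads, seq_len, d = q.shape
    heads_per_group = num_q_heads // num_kv_groups
    out = np.empty_like(q)
    for h in range(num_q_heads):
        g = h // heads_per_group                 # KV group for query head h
        scores = q[h] @ k[g].T / np.sqrt(d)      # (seq, seq) attention logits
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        out[h] = weights @ v[g]
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))   # 8 query heads (illustrative)
k = rng.standard_normal((2, 4, 16))   # only 2 KV groups
v = rng.standard_normal((2, 4, 16))
out = grouped_query_attention(q, k, v, num_kv_groups=2)
print(out.shape)  # (8, 4, 16)
```

With 8 query heads but only 2 KV heads, the cached keys and values are a quarter the size of standard multi-head attention, which is exactly the kind of saving that matters on phones and laptops.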
Liquid AI emphasized that this experimental checkpoint focuses mainly on instruction following, knowledge question answering, and mathematical reasoning, making it suitable for agent workflows, retrieval-augmented generation (RAG), data extraction, creative writing, and multi-turn dialogue.

Performance Highlights: Big Power from a Small Size
In the latest benchmark tests, LFM2-2.6B-Exp delivered striking results:
- IFBench (instruction following): Scored well ahead of similarly sized models, even surpassing DeepSeek R1-0528, which has roughly 263 times as many parameters.
- GPQA (graduate-level knowledge QA): Reached approximately 42%, far exceeding typical 3B models.
- IFEval (strict instruction following): Exceeded 88%, beating many models with over 10B parameters.
- GSM8K (mathematical reasoning): Scored above 82%, outperforming Llama 3.2 3B and the Gemma 3 series.
Additionally, the model's prefill and decode speed on CPU is reported to be twice that of competing models, with very low memory usage and bfloat16 support, delivering what Liquid AI calls "PhD-level reasoning on a smartphone."
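The memory claim is easy to sanity-check with back-of-envelope arithmetic: at bfloat16 (2 bytes per parameter), the weights alone for 2.6B parameters fit comfortably in laptop and high-end phone RAM. The figures below are illustrative estimates, not official Liquid AI numbers.

```python
# Rough weight-memory estimate for a 2.6B-parameter model in bfloat16.
# Excludes KV cache and activations, so real usage is somewhat higher.
params = 2.6e9
bytes_per_param = 2                      # bfloat16 = 16 bits = 2 bytes
weight_gb = params * bytes_per_param / 1024**3
print(f"~{weight_gb:.1f} GiB of weights")  # ~4.8 GiB
```

That is well under the 8-16 GB of RAM typical of the laptops and flagship phones the model targets, which is why a 2.6B model is deployable where a hundreds-of-billions-parameter model is not.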
Open Source Significance: Accelerating the Popularization of Edge AI
LFM2-2.6B-Exp is fully open source, with model weights uploaded to the Hugging Face platform, allowing developers to freely download and integrate it into local applications. This not only demonstrates the huge potential of reinforcement learning on small models but also further promotes the development of the edge AI ecosystem, making high-performance AI accessible from the cloud to every device.
AIbase Comment: The release of LFM2-2.6B-Exp marks an acceleration of the small-model era: advanced performance no longer requires massive parameter counts and can instead come from smarter training paradigms. For developers and enterprises that prioritize privacy, low latency, and low cost, this model is currently one of the best choices available. As RL techniques and hybrid architectures continue to evolve, 3B open-source models may approach far larger models in capability while running smoothly on almost any device. Interested readers can head to Hugging Face to download and try it, opening a new chapter in edge intelligence.
Address: https://huggingface.co/LiquidAI/LFM2-2.6B-Exp
