Liquid AI has officially launched a new member of its Liquid Foundation Model series: LFM2.5-1.2B-Thinking. This 1.2-billion-parameter reasoning model marks a significant step for edge deployment: it runs on modern smartphones in roughly 900 MB of memory, meaning reasoning capabilities that previously required data-center support can now run fully offline on a personal mobile device.


Unlike general-purpose models built for everyday conversation, LFM2.5-1.2B-Thinking is designed for complex logical reasoning, mathematics, and tool calling. It generates internal "thinking traces" before producing a final answer, much like a human working through a problem: it plans steps and verifies intermediate results, which significantly improves accuracy on multi-step instructions.
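In practice, applications usually strip the thinking trace before showing the user the final answer. As a minimal sketch, assuming the model wraps its trace in `<think>...</think>` delimiters (a common convention for reasoning models, not a format Liquid AI documents here), separation might look like:

```python
import re

def split_thinking(output: str) -> tuple[str, str]:
    """Split a model completion into (thinking trace, final answer).

    Assumes a hypothetical <think>...</think> delimiter convention;
    if no trace is present, the whole output is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match is None:
        return "", output.strip()
    trace = match.group(1).strip()
    answer = output[match.end():].strip()
    return trace, answer

# Example completion with an internal trace before the final answer
raw = "<think>2 apples + 3 apples = 5 apples</think>The answer is 5."
trace, answer = split_thinking(raw)
```

The trace stays available for logging or debugging while only the answer is surfaced in the UI.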

In terms of actual performance, the model is highly efficient: decoding reaches 239 characters per second on an AMD CPU and 82 characters per second on a mobile NPU. By applying advanced training techniques such as multi-stage reinforcement learning with verifiable rewards (RLVR), the development team reduced the "loop hang" rate common in reasoning models from 15.74% to 0.36%, ensuring a smooth and stable user experience on edge devices.
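The core idea behind RLVR is that the reward comes from a programmatic check rather than a learned judge, which also makes it easy to penalize degenerate repetition. The sketch below is a toy illustration under my own assumptions (a last-token answer check and an n-gram repetition heuristic), not Liquid AI's actual reward design:

```python
def has_ngram_loop(tokens: list[str], n: int = 6, max_repeats: int = 3) -> bool:
    """Heuristic loop detector: flag outputs where some n-gram of tokens
    repeats more than max_repeats times (a 'loop hang')."""
    counts: dict[tuple[str, ...], int] = {}
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        counts[gram] = counts.get(gram, 0) + 1
        if counts[gram] > max_repeats:
            return True
    return False

def verifiable_reward(completion: str, gold_answer: str) -> float:
    """Toy RLVR-style reward: +1 for a programmatically checkable correct
    answer, 0 otherwise, and -1 for degenerate looping output.

    Assumes the final token of the completion is the answer; real setups
    use structured answer extraction (e.g. a boxed or tagged answer).
    """
    tokens = completion.split()
    if has_ngram_loop(tokens):
        return -1.0  # negative reward discourages loop hangs during RL
    final = tokens[-1].rstrip(".") if tokens else ""
    return 1.0 if final == gold_answer else 0.0
```

Because the reward is computed, not modeled, it can be applied at scale across training stages, and the explicit loop penalty is one plausible mechanism for driving the loop-hang rate down.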