Kimi K2.5 - Everything you need to know
January 28, 2026
- Moonshot releases Kimi K2.5, a 1-trillion-parameter open weights model with native multimodal support.
- Model achieves elite status on agentic benchmarks, outperforming DeepSeek V3.2 and GLM-4.7.
- Features 32B active parameters via a Mixture-of-Experts architecture, and cuts hallucination rates by training the model to abstain when uncertain.
Moonshot’s latest release, Kimi K2.5, marks a pivotal moment for the open weights community by narrowing the performance gap with proprietary giants like OpenAI and Anthropic. As a Mixture-of-Experts (MoE) model, it boasts a staggering 1 trillion total parameters, though it operates efficiently by activating only 32 billion of them for any given token. This sparse architecture lets the model handle diverse, complex reasoning without the computational overhead typically associated with dense systems of this scale.
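To make the sparse-activation idea concrete, here is a minimal sketch of top-k expert routing, the basic mechanism MoE models use to keep most of their parameters dormant on each token. The layer sizes, expert count, and k below are illustrative placeholders, not K2.5's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Minimal top-k Mixture-of-Experts layer (illustrative sizes only)."""

    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.router(x)                     # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)  # keep only k experts per token
        weights = F.softmax(weights, dim=-1)        # renormalize over the kept experts
        out = torch.zeros_like(x)
        # Only the selected experts ever run: total parameters are huge,
        # but active parameters per token stay small.
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512])
```

The key point is in the forward pass: every token is scored against all experts, but only the top-k actually execute, so compute scales with active parameters rather than total parameters.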
What truly sets K2.5 apart is its native multimodality, a first for Moonshot’s flagship series. By supporting both image and video inputs directly, it removes a significant hurdle for developers who previously had to rely on closed-source alternatives for sophisticated visual reasoning. In benchmark testing, K2.5 demonstrated visual capabilities nearly on par with frontier models, signaling that open weights models are no longer second-class citizens in the multimodal landscape.
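For developers, multimodal open weights models are typically served behind OpenAI-compatible endpoints. The sketch below shows what an image query could look like under that assumption; the base URL, API key, and model identifier are placeholders, so check Moonshot's documentation for the real values.

```python
import base64
from openai import OpenAI

# Endpoint and model name are illustrative; consult Moonshot's docs
# for the actual base URL and K2.5 model identifier.
client = OpenAI(base_url="https://api.moonshot.ai/v1", api_key="YOUR_KEY")

with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="kimi-k2.5",  # placeholder model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text",
             "text": "Summarize the trend shown in this chart."},
        ],
    }],
)
print(response.choices[0].message.content)
```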
The model shines brightest in agentic loops, where the AI autonomously performs multi-step knowledge work such as web browsing and data analysis. Through its specialized reasoning mode, K2.5 achieved a high Elo rating on the GDPval-AA leaderboard, proving its reliability in executing complex, real-world instructions. Moonshot has also lowered the hallucination rate (the frequency at which a model makes up facts) by training K2.5 to abstain from answering when it lacks the relevant knowledge, rather than fabricating plausible-sounding information.
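Moonshot has not detailed its abstention training recipe in this announcement. As a rough illustration of the same principle applied at inference time instead, the toy gate below refuses to answer when a generation's average token log-probability falls under a hypothetical confidence threshold; both the scoring and the threshold are assumptions for illustration.

```python
def answer_or_abstain(token_logprobs, answer, threshold=-0.5):
    """Toy abstention gate: refuse when average token log-probability
    falls below a (hypothetical) confidence threshold."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    if avg_logprob < threshold:
        return "I don't know."  # abstain instead of guessing
    return answer

# Confident generation: near-zero log-probs, so the answer passes through.
print(answer_or_abstain([-0.05, -0.10, -0.02], "Paris"))
# Shaky generation: low log-probs, so the gate abstains.
print(answer_or_abstain([-2.3, -1.7, -3.1], "Vienna (guess)"))
```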