GLM-5: The New Performance Leader in Open Weights AI
- GLM-5 becomes the first open weights model to score 50 on the Artificial Analysis Intelligence Index.
- A new architecture scales to 744B total parameters using an efficient Mixture-of-Experts design and sparse attention.
- A significant reduction in hallucination rates makes GLM-5 a top-tier performer in agentic tasks.
The landscape of open weights models has shifted with the arrival of GLM-5, a massive 744-billion-parameter model from Z AI. By scoring 50 on the Artificial Analysis Intelligence Index, it has become the first open weights model to reach this threshold, narrowing the performance gap between freely accessible and proprietary models.
Unlike its predecessors, GLM-5 uses a Mixture-of-Experts architecture in which only 40 billion of its 744 billion parameters are active for any given token. This design lets the model handle complex tasks with far less compute per token than a dense model of the same size. It also integrates sparse attention mechanisms, which cut attention cost by restricting each token to the most relevant portions of the context.
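Z AI has not published GLM-5's router internals, but the general top-k Mixture-of-Experts pattern behind the "40B active of 744B total" figure can be sketched. In this toy NumPy version (all names and sizes are illustrative, not GLM-5's real dimensions), a learned gate picks the k highest-scoring experts per token and blends their outputs, so most parameters never run for that token:

```python
import numpy as np

def top_k_moe_layer(x, gate_w, experts, k=2):
    """Route one token's hidden state through only k of n experts."""
    logits = x @ gate_w                   # router score per expert
    top = np.argsort(logits)[-k:]         # indices of the k best-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()              # softmax over the selected experts only
    # Only the chosen experts execute; the remaining parameters stay idle.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4                       # toy sizes for illustration
gate_w = rng.normal(size=(d, n_experts))
experts = [lambda v, W=rng.normal(size=(d, d)): np.tanh(v @ W)
           for _ in range(n_experts)]
out = top_k_moe_layer(rng.normal(size=d), gate_w, experts, k=2)
print(out.shape)                          # (8,) - same shape in and out
```

With k fixed, compute per token stays roughly constant no matter how many experts exist, which is how total parameter count can grow to hundreds of billions while the active set stays at 40B.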
One of the most striking improvements is in "agentic" performance, the ability of an AI to complete multi-step tasks such as data analysis or video editing, as measured by the GDPval-AA benchmark. GLM-5 now ranks third overall in this category, trailing only proprietary leaders such as Claude and GPT-5. The model has also dramatically reduced its tendency to hallucinate by learning to abstain from answering when it lacks sufficient information.
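The mechanics of GLM-5's abstention behavior are not public, but the general idea, declining to answer when the model's own confidence is low, can be illustrated with a simple threshold check. Everything below (the threshold value, the helper name, the use of mean token log probability as a confidence proxy) is a hypothetical sketch, not GLM-5's actual method:

```python
import math

# Hypothetical threshold; Z AI has not published how GLM-5 decides to abstain.
ABSTAIN_THRESHOLD = 0.75

def answer_or_abstain(candidate_answer: str, token_logprobs: list[float]) -> str:
    """Return the answer only if a confidence proxy clears a threshold.

    token_logprobs: per-token log probabilities of the generated answer.
    Confidence here is the geometric-mean token probability, a common proxy.
    """
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    confidence = math.exp(mean_logprob)
    if confidence < ABSTAIN_THRESHOLD:
        return "I don't have enough information to answer reliably."
    return candidate_answer

print(answer_or_abstain("Paris", [-0.05, -0.02]))             # confident: answer
print(answer_or_abstain("Maybe 1987?", [-1.2, -2.5, -0.9]))   # uncertain: abstain
```

Trading a wrong answer for an explicit refusal is what drives hallucination-rate benchmarks down, since abstentions are typically scored as neutral rather than incorrect.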
Despite its size, GLM-5 is remarkably efficient at inference time, requiring fewer output tokens than its predecessors to complete the same evaluations. Deployment remains a hurdle, however: at BF16 precision the weights alone occupy roughly 1.5 TB of memory. Even so, its release under an MIT License is a significant win for open-source AI.
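That memory figure follows directly from the parameter count: BF16 is 16 bits, so 2 bytes per parameter. A quick back-of-the-envelope calculation (weights only; KV cache and activations add more on top):

```python
# Weight memory for GLM-5 at BF16 precision (2 bytes per parameter).
total_params = 744e9        # total parameters (MoE)
active_params = 40e9        # parameters active per token

bytes_per_param = 2         # BF16 = 16 bits
weights_tb = total_params * bytes_per_param / 1e12
print(f"Full weights: ~{weights_tb:.2f} TB")      # ~1.49 TB

# Per-token compute touches far fewer weights, the MoE efficiency win,
# but all 744B parameters must still be resident in memory to serve requests.
active_gb = active_params * bytes_per_param / 1e9
print(f"Active per token: ~{active_gb:.0f} GB")   # ~80 GB
```

This is why MoE models cut serving cost (compute scales with the 40B active set) without cutting hardware requirements for hosting (memory scales with the full 744B).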