Korea's Flagship AI, Built with Proprietary Technology: K-EXAONE
- LG AI Research unveils K-EXAONE, a 236B-parameter MoE model ranking 7th globally among open-weight models.
- Proprietary AGAPO reinforcement learning and Multi-Token Prediction deliver 1.5x faster inference and superior reasoning.
- The model offers a 260,000-token context window and a specialized SuperBPE tokenizer for high-efficiency processing.
LG AI Research has officially entered the global race toward the "trillion-parameter era" with K-EXAONE, a sovereign AI model designed to secure Korea's independence in the high-stakes LLM market. At its core, K-EXAONE uses a Mixture-of-Experts (MoE) architecture in which only 10% of its 236 billion parameters are active for any given token, letting the model pair large-scale capability with manageable computing costs. This efficiency is further bolstered by a custom "SuperBPE" tokenizer, which condenses frequent multi-word combinations into single tokens and significantly speeds up how the model reads and processes text.

What truly sets the model apart is its training pipeline, built around a self-designed reinforcement learning algorithm called AGAPO. Unlike standard methods that simply discard failed attempts, AGAPO assigns negative rewards to "incorrect" reasoning paths, effectively teaching the AI what not to do during complex logical reasoning.

These refinements have propelled K-EXAONE to the top of domestic leaderboards, where it outperforms several global open-weight models on mathematics and coding benchmarks. With a 260,000-token context window, large enough to process entire books in a single pass, LG is positioning K-EXAONE as a versatile engine for the coming "Agentic AI" era, in which models must wield digital tools and APIs to solve real-world problems.
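To make the sparse-activation idea concrete, here is a minimal sketch of top-k Mixture-of-Experts routing in Python. The hidden size, expert count, and top-k value are illustrative assumptions; K-EXAONE's actual router configuration has not been published.

```python
# Minimal sketch of MoE top-k routing. All sizes below are toy assumptions,
# not K-EXAONE's real configuration.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64     # toy hidden size
N_EXPERTS = 16   # toy expert count
TOP_K = 2        # experts run per token: only a fraction of weights are active

# Each expert is a small feed-forward layer: (W_in, W_out).
experts = [
    (rng.standard_normal((D_MODEL, 4 * D_MODEL)) * 0.02,
     rng.standard_normal((4 * D_MODEL, D_MODEL)) * 0.02)
    for _ in range(N_EXPERTS)
]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(tokens):
    """Route each token to its top-k experts and mix their outputs.

    Only TOP_K of N_EXPERTS run per token, which is how an MoE model keeps
    per-token compute far below its total parameter count.
    """
    logits = tokens @ router_w                       # (n_tokens, N_EXPERTS)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]   # indices of chosen experts
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        chosen = topk[i]
        gates = softmax(logits[i, chosen])           # renormalize over chosen experts
        for gate, e_idx in zip(gates, chosen):
            w_in, w_out = experts[e_idx]
            h = np.maximum(tok @ w_in, 0.0)          # ReLU feed-forward expert
            out[i] += gate * (h @ w_out)
    return out

tokens = rng.standard_normal((5, D_MODEL))
print(moe_layer(tokens).shape)  # (5, 64): same shape, but only 2/16 experts ran per token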
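The SuperBPE idea can be illustrated with a toy byte-pair-style merge loop that, unlike standard BPE, is allowed to merge across spaces, so frequent phrases collapse into single "superword" tokens. The corpus and merge budget below are invented for illustration and greatly simplify the real tokenizer.

```python
# Toy illustration of the SuperBPE idea: merges may cross word boundaries,
# so repeated multi-word strings become single tokens. Standard BPE would
# stop each merge at a space.
from collections import Counter

def learn_merges(text, n_merges):
    seq = list(text)  # character-level start; spaces are ordinary symbols
    merges = []
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair repeats anymore
        merges.append((a, b))
        # Replace every occurrence of the pair with the merged symbol.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return merges, seq

corpus = "of the model of the model of the model"
merges, tokens = learn_merges(corpus, n_merges=30)
print(tokens)  # the repeated phrase "of the model " collapses into one cross-word token
```

Fewer tokens per sentence means fewer decoding steps, which is where the claimed processing efficiency comes from.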
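The article's description of AGAPO, learning from incorrect paths via negative rewards instead of discarding them, can be sketched with a generic group-relative advantage computation. This is not LG's proprietary algorithm, only an illustration of the reward-shaping idea the article describes; the reward values are hypothetical.

```python
# Hedged sketch of the training signal the article attributes to AGAPO:
# every sampled solution in a group gets a reward (negative when wrong),
# so failed reasoning paths actively push the policy away from themselves
# rather than being filtered out. AGAPO's real objective is proprietary.
import numpy as np

def group_advantages(rewards):
    """Center and scale rewards within a group of sampled solutions.

    Wrong answers keep a negative advantage instead of being dropped, so the
    policy-gradient update explicitly suppresses their token probabilities.
    """
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Six sampled reasoning paths for one math problem:
# +1 for a correct final answer, -1 for an incorrect one (assumed scheme).
rewards = [1, -1, -1, 1, -1, -1]
for r, a in zip(rewards, group_advantages(rewards)):
    print(f"reward={r:+d}  advantage={a:+.2f}")
```

Each advantage would weight the log-probability of its path in the gradient: positive values reinforce correct paths, negative values teach the model "what not to do."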
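Finally, the Multi-Token Prediction speedup cited in the summary is typically realized at inference time as draft-and-verify decoding: cheap extra heads propose several tokens per step, and the main model confirms them in one pass. The `draft_next` and `target_next` functions below are hypothetical stand-ins for that process; K-EXAONE's actual decoding scheme has not been detailed publicly.

```python
# Toy sketch of multi-token-prediction-style speedup via draft-and-verify
# decoding. Both "models" are deterministic stand-ins for illustration.

def target_next(prefix):
    """Stand-in for the full model's greedy next token (expensive in reality)."""
    return (sum(prefix) * 31 + len(prefix)) % 100

def draft_next(prefix):
    """Stand-in for a cheap draft head; right most of the time."""
    tok = target_next(prefix)
    return tok if len(prefix) % 5 else (tok + 1) % 100  # inject occasional misses

def speculative_decode(prefix, n_tokens, k=4):
    """Draft k tokens cheaply, verify against the target, keep the agreeing
    prefix plus the target's correction. Output matches plain greedy decoding,
    but the target model is consulted in blocks, not one token at a time."""
    out = list(prefix)
    steps = 0
    while len(out) - len(prefix) < n_tokens:
        draft, ctx = [], list(out)
        for _ in range(k):                    # cheap: draft k tokens ahead
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        steps += 1                            # one (batched) target verification pass
        ctx = list(out)
        for t in draft:
            if target_next(ctx) == t:         # accept tokens matching the target
                out.append(t)
                ctx.append(t)
            else:
                out.append(target_next(ctx))  # correct the first mismatch, stop
                break
    return out[:len(prefix) + n_tokens], steps

tokens, steps = speculative_decode([1, 2, 3], n_tokens=12, k=4)
print(tokens, f"target passes: {steps} (vs 12 for one-token-at-a-time decoding)")
```

When most drafted tokens are accepted, the expensive model runs far fewer times per generated token, which is the kind of gain behind the reported 1.5x inference speedup.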