India Launches Sarvam 105B: New Open-Weights Models Debut
- SarvamAI releases Sarvam 105B and 30B models trained from scratch in India
- New models support reasoning and non-reasoning modes, released under the Apache 2.0 open-source license
- Benchmarks show strong agentic capabilities despite trailing top-tier reasoning models
The landscape of open-weights artificial intelligence is becoming increasingly global. SarvamAI has officially entered the race, announcing its Sarvam 105B and Sarvam 30B models at the India AI Impact Summit 2026. These models are notable for being trained entirely within India using local compute resources, marking a significant step toward developing sovereign AI infrastructure.
Both models utilize a Mixture-of-Experts (MoE) architecture, which improves efficiency by activating only a fraction of the model's parameters for each request rather than the entire network. Sarvam 105B also features a 128K-token context window, letting it ingest and reason over very long documents in a single pass. While the models trail the industry's highest-performing reasoning models on raw benchmarks, Sarvam 105B demonstrates impressive relative strength in agentic tasks: those where an AI must complete complex, multi-step goals autonomously.
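To make the efficiency argument concrete, here is a minimal, illustrative sketch of top-k expert routing in Python. The expert count, the random gate, and the top-2 choice are assumptions for demonstration, not Sarvam's published configuration:

```python
# Illustrative Mixture-of-Experts routing (NOT Sarvam's actual architecture).
# Each token is sent to a small top-k subset of expert networks, so only a
# fraction of the total parameters is exercised per token.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # assumed for illustration; real expert counts differ
TOP_K = 2         # experts activated per token
D_MODEL = 16      # toy hidden size

# Toy "experts": each is just a single linear layer here.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router = rng.normal(size=(D_MODEL, NUM_EXPERTS))  # learned gate in practice

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector x through its top-k experts."""
    logits = x @ router                 # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]   # keep only the k highest-scoring
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS weight matrices are multiplied here.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D_MODEL)
print(moe_layer(token).shape)  # (16,)
```

Because only the selected experts' weights participate in each forward pass, per-token compute scales with the active subset rather than the full parameter count, which is how a 105B-parameter model can serve requests at a fraction of the dense-model cost.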
The release also signals a commitment to accessibility and collaboration. Under an Apache 2.0 license, these models are freely available on platforms like HuggingFace and AIKosh. This "open-weights" approach ensures that researchers and developers worldwide can inspect, adapt, and build upon Sarvam’s work, fostering a more inclusive AI ecosystem that extends beyond the current dominance of models from the United States and China.
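For developers who want to experiment, a typical download-and-generate flow with the Hugging Face transformers library might look like the sketch below. The repository ID is a hypothetical placeholder; consult Sarvam's official HuggingFace or AIKosh pages for the actual name:

```python
# Hypothetical loading snippet using the transformers library.
# "sarvamai/sarvam-105b" is an ASSUMED repo ID, not a verified one.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "sarvamai/sarvam-105b"  # placeholder; check the official model card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",   # shard across available GPUs (requires accelerate)
    torch_dtype="auto",  # load in the checkpoint's native precision
)

prompt = "Summarize the benefits of open-weights models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is exactly the kind of workflow an Apache 2.0 license enables: the weights can be pulled, inspected, fine-tuned, and redeployed without negotiating a commercial agreement.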