Baichuan Releases 235B Medical Model Outperforming GPT-5.2
- Baichuan Intelligent Technology unveils Baichuan-M3, a 235B-parameter medical-focused foundation model
- Model outperforms GPT-5.2 on HealthBench, achieving state-of-the-art results in clinical reasoning
- Specialized features include proactive patient inquiry and adaptive hallucination suppression for safety
Baichuan Intelligent Technology has unveiled Baichuan-M3, a 235-billion-parameter foundation model engineered to navigate the high-stakes complexity of clinical environments. Moving beyond passive question answering, the medically enhanced model is designed to emulate the diagnostic workflow of a professional physician, shifting the paradigm toward active decision support through a specialized training pipeline that models systematic medical inquiry rather than simple text prediction.
One of the model's standout features is proactive information acquisition. Rather than relying solely on the initial prompt, Baichuan-M3 identifies missing variables and asks clarifying questions to resolve ambiguity in the patient's presentation. This is bolstered by long-horizon reasoning, which allows the system to connect disparate symptoms and historical medical data into a single, coherent diagnosis across extended multi-turn interactions.
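The inquiry-before-diagnosis behavior described above can be sketched in miniature. The field names and triage logic below are illustrative inventions, not Baichuan-M3's actual inquiry policy: the idea is simply that the system asks about the first unresolved variable instead of guessing.

```python
# Hypothetical sketch of proactive information acquisition: before
# committing to a diagnosis, check which clinically relevant fields
# are still missing and ask about one of them first.
# REQUIRED_FIELDS and the question template are illustrative only.

REQUIRED_FIELDS = ["chief_complaint", "symptom_duration", "medication_history"]

def next_action(case: dict) -> str:
    """Return a clarifying question if data is missing, else a proceed signal."""
    missing = [f for f in REQUIRED_FIELDS if not case.get(f)]
    if missing:
        # Ask about the first unresolved variable rather than inferring it.
        return f"Could you tell me about your {missing[0].replace('_', ' ')}?"
    return "PROCEED_TO_DIAGNOSIS"

case = {"chief_complaint": "chest pain"}
print(next_action(case))  # → Could you tell me about your symptom duration?
case["symptom_duration"] = "2 days"
case["medication_history"] = "none"
print(next_action(case))  # → PROCEED_TO_DIAGNOSIS
```

A real system would, of course, derive the missing-variable set from the model's own reasoning over the dialogue rather than a fixed checklist; the loop structure is the point here.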
To mitigate the dangers of false medical advice, the researchers integrated adaptive hallucination suppression, which keeps the model factually grounded during consultations. In empirical evaluations, Baichuan-M3 achieved state-of-the-art results on the HealthBench and ScanBench frameworks, notably outperforming GPT-5.2 in the clinical-inquiry and safety categories. The full 235B model and its various quantized versions have been released as open weights on Hugging Face, enabling broader access for medical researchers.
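The article does not detail how the adaptive suppression works, but one common ingredient of such safety mechanisms is confidence-based abstention: if the model's belief in its top answer falls below a threshold, it declines rather than asserts. The sketch below is a toy illustration of that general idea under assumed candidate probabilities, not Baichuan-M3's actual mechanism.

```python
# Toy illustration of confidence-based abstention, one plausible
# component of hallucination suppression. The threshold value and
# the candidate probabilities are illustrative assumptions.

def answer_or_abstain(candidates: dict, threshold: float = 0.7) -> str:
    """candidates maps candidate answers to model-assigned probabilities."""
    best, prob = max(candidates.items(), key=lambda kv: kv[1])
    if prob < threshold:
        # Below the confidence bar: refuse to assert a diagnosis.
        return "I'm not certain; please consult a clinician."
    return best

confident = {"viral pharyngitis": 0.85, "strep throat": 0.15}
uncertain = {"viral pharyngitis": 0.40, "strep throat": 0.35, "other": 0.25}
print(answer_or_abstain(confident))  # → viral pharyngitis
print(answer_or_abstain(uncertain))  # → I'm not certain; please consult a clinician.
```

The "adaptive" part reported for Baichuan-M3 presumably goes further, e.g. adjusting its grounding behavior to context, but a fixed-threshold abstention rule is the simplest baseline of the family.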