StepFun Launches PaCoRe Framework for Parallel AI Reasoning
- StepFun introduced PaCoRe, a reasoning framework that scales test-time compute through massive parallel exploration and a message-passing architecture.
- The PaCoRe-8B model achieved a 94.5% score on the HMMT 2025 math benchmark, surpassing the 93.2% score recorded by GPT-5.
- StepFun has open-sourced the model checkpoints, specialized training datasets, and full inference pipeline on GitHub under an MIT license.
Jingcheng Hu, a researcher at StepFun, and his team have introduced Parallel Coordinated Reasoning (PaCoRe), a framework that shifts AI reasoning from sequential chains to massive parallel exploration. Current language models often hit performance ceilings due to the context window limits of sequential processing. PaCoRe bypasses this by launching multiple parallel trajectories using a message-passing architecture. This system compacts findings into bounded messages that guide subsequent exploration, scaling test-time compute (TTC) to millions of tokens without exceeding memory constraints.
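The coordination loop described above can be sketched in miniature. This is a hedged illustration only: the function names, message budget, and round structure are assumptions for demonstration, not StepFun's released pipeline, and the `explore` stub stands in for what would be an LLM call in a real system.

```python
import concurrent.futures
import textwrap

# Illustrative sketch of PaCoRe-style coordination (names and logic are
# assumptions, not StepFun's actual implementation).
MESSAGE_BUDGET = 120  # max characters carried between rounds ("bounded message")

def explore(problem: str, hint: str, seed: int) -> str:
    """Stand-in for one reasoning trajectory; a real system would call an LLM."""
    return f"trajectory {seed}: partial result for '{problem}' given hint '{hint}'"

def compact(findings: list[str], budget: int = MESSAGE_BUDGET) -> str:
    """Merge findings into a single bounded message for the next round."""
    merged = " | ".join(findings)
    return textwrap.shorten(merged, width=budget, placeholder=" ...")

def pacore_round(problem: str, hint: str, width: int = 4) -> str:
    # Launch `width` trajectories in parallel; each sees only the compact hint,
    # never the others' full traces, so context stays bounded regardless of width.
    with concurrent.futures.ThreadPoolExecutor(max_workers=width) as pool:
        findings = list(pool.map(lambda s: explore(problem, hint, s), range(width)))
    return compact(findings)

hint = ""
for round_id in range(3):  # successive rounds of breadth-first exploration
    hint = pacore_round("HMMT-style problem", hint)
    print(f"round {round_id}: message length = {len(hint)}")
```

The key property the sketch illustrates is that total exploration (width times rounds) can grow arbitrarily while the inter-round message stays under a fixed budget, which is how effective test-time compute scales without exceeding memory constraints.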
StepFun's experiments suggest that breadth can be more impactful than depth in complex problem-solving. By scaling effective TTC to two million tokens, the 8B-parameter model achieved 94.5% on the HMMT 2025 mathematics benchmark, notably outperforming GPT-5's 93.2% and highlighting the efficiency of optimized inference scaling. The result demonstrates that smaller models can reach top-tier performance by strategically distributing compute during the reasoning phase.
To support community development, StepFun has open-sourced the PaCoRe-8B checkpoints, training corpus, and inference code under an MIT license. This release provides a reproducible path for scaling reasoning capabilities in smaller models. The full pipeline is now available on GitHub, facilitating broader experimentation with parallel coordination. By democratizing these tools, StepFun aims to accelerate the evolution of high-performance reasoning across the AI landscape.