MiniMax M2.5
MiniMax-M2.5 is a state-of-the-art large language model designed for real-world productivity. Trained across a diverse range of complex, real-world digital working environments, M2.5 builds on the coding expertise of M2.1 and extends into general office work: it is fluent in generating and operating Word, Excel, and PowerPoint files, switching context between diverse software environments, and working across different agent and human teams. It scores 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, and is also more token-efficient than previous generations, having been trained to optimize its actions and output through planning.