MiniMax-M2.5
About MiniMax-M2.5
MiniMax-M2.5 is MiniMax's latest large language model, trained extensively with reinforcement learning across hundreds of thousands of complex real-world environments. Built on a 229B-parameter Mixture-of-Experts (MoE) architecture, it achieves state-of-the-art performance in coding, agentic tool use, search, and office work, scoring 80.2% on SWE-Bench Verified while delivering 37% faster inference than M2.1.
Available Serverless
Run queries immediately and pay only for usage.
$0.2 / $1.0 per 1M Tokens (input / output)
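
A minimal request sketch, assuming an OpenAI-compatible chat completions endpoint; the base URL, the API-key environment variable, and the exact model identifier `MiniMaxAI/MiniMax-M2.5` are placeholder assumptions, not confirmed by this page:

```python
import os

from openai import OpenAI

# Hypothetical OpenAI-compatible serverless endpoint; swap in your
# provider's actual base URL and key.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["API_KEY"],
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",  # assumed identifier, matching this card's naming
    messages=[
        {"role": "user", "content": "Write a Python function that merges two sorted lists."},
    ],
    max_tokens=1024,  # anything up to the card's 131K output cap
)
print(response.choices[0].message.content)
```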
Metadata
State: Available
Architecture: Mixture-of-Experts (MoE)
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 229B
Activated Parameters: 229B
Reasoning: No
Precision: FP8
Context Length: 197K
Max Tokens: 131K
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Supported
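
Since Tools and JSON Mode are listed as supported, here is a hedged sketch of both through the same assumed OpenAI-compatible client; the tool schema and `response_format` value follow the OpenAI convention, and the endpoint, key, model identifier, and `search_docs` tool are illustrative assumptions:

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder endpoint, as above
    api_key=os.environ["API_KEY"],
)

# Tools: a hypothetical function schema in the OpenAI convention.
tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",  # hypothetical tool, for illustration only
        "description": "Search internal documentation for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

tool_resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",  # assumed identifier
    messages=[{"role": "user", "content": "Find the deployment guide."}],
    tools=tools,
)
print(tool_resp.choices[0].message.tool_calls)

# JSON Mode: guarantees syntactically valid JSON. Note the card lists
# Structured Outputs (schema-constrained generation) as not supported,
# so stick to json_object rather than a json_schema response format.
json_resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",
    messages=[{"role": "user", "content": "List three MoE design trade-offs as JSON."}],
    response_format={"type": "json_object"},
)
print(json_resp.choices[0].message.content)
```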
Compare with Other Models
See how this model stacks up against others.

All four models below are MiniMaxAI chat models.

Model            Released       Total Context   Max Output   Input ($/1M)   Output ($/1M)
MiniMax-M2.5     Feb 15, 2026   197K            131K         $0.2           $1.0
MiniMax-M2.1     Dec 23, 2025   197K            131K         $0.29          $1.2
MiniMax-M2       Oct 28, 2025   197K            131K         $0.3           $1.2
MiniMax-M1-80k   Jun 17, 2025   131K            131K         $0.55          $2.2

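To make the price gaps concrete, here is a small worked example that computes per-request cost from the table above; the per-million-token prices come from this page, while the token counts are illustrative:

```python
# Per-million-token prices in USD (input, output), from the comparison table.
PRICES = {
    "MiniMax-M2.5":   (0.20, 1.00),
    "MiniMax-M2.1":   (0.29, 1.20),
    "MiniMax-M2":     (0.30, 1.20),
    "MiniMax-M1-80k": (0.55, 2.20),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request: tokens / 1e6 * price per 1M tokens."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Illustrative request: 50K input tokens, 5K output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 50_000, 5_000):.4f}")
# MiniMax-M2.5 -> $0.0150, MiniMax-M2.1 -> $0.0205,
# MiniMax-M2 -> $0.0210, MiniMax-M1-80k -> $0.0385
```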