

Model Comparison: MiniMax-M2 vs Qwen2.5-72B-Instruct-128K
Jan 13, 2026

Pricing

| Pricing | MiniMax-M2       | Qwen2.5-72B-Instruct-128K |
|---------|------------------|---------------------------|
| Input   | $0.30 / M tokens | $0.59 / M tokens          |
| Output  | $1.20 / M tokens | $0.59 / M tokens          |
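
To make the per-million-token rates concrete, the sketch below computes the cost of a single hypothetical request against both models. The 20,000-input / 2,000-output token counts are made up for illustration; only the dollar rates come from the table above.

```python
# Rough cost comparison using the per-million-token rates listed above.
# The request size (20k input / 2k output tokens) is a hypothetical example.

PRICES = {
    "MiniMax-M2": {"input": 0.30, "output": 1.20},                 # $ per 1M tokens
    "Qwen2.5-72B-Instruct-128K": {"input": 0.59, "output": 0.59},  # $ per 1M tokens
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the cost in dollars for one request."""
    rates = PRICES[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

for model in PRICES:
    print(f"{model}: ${request_cost(model, 20_000, 2_000):.4f}")
# MiniMax-M2:                $0.0084  (20k * $0.30/M + 2k * $1.20/M)
# Qwen2.5-72B-Instruct-128K: $0.0130  (22k * $0.59/M)
```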
Metadata

| Specification | MiniMax-M2 | Qwen2.5-72B-Instruct-128K |
|---------------|------------|---------------------------|
| State         | Available  | Available                 |

Architecture

| Specification        | MiniMax-M2 | Qwen2.5-72B-Instruct-128K |
|----------------------|------------|---------------------------|
| Calibrated           | No         | No                        |
| Mixture of Experts   | Yes        | No                        |
| Total Parameters     | 230B       | 72B                       |
| Activated Parameters | 10B        | —                         |
| Reasoning            | No         | No                        |
| Precision            | FP8        | FP8                       |
| Context Length       | 197K       | 131K                      |
| Max Tokens           | 131K       | 4K                        |
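
The context-length and max-token limits above determine whether a given request fits at all. The sketch below is a generic budget check, not a provider-specific API: the limits are taken from the table, while the token counts in the example calls are illustrative and would come from the model's tokenizer in practice.

```python
# Generic budget check against the context-window and max-output limits listed above.
# Token counts in the example calls are illustrative placeholders.

LIMITS = {
    "MiniMax-M2": {"context": 197_000, "max_output": 131_000},
    "Qwen2.5-72B-Instruct-128K": {"context": 131_000, "max_output": 4_000},
}

def fits(model: str, prompt_tokens: int, requested_output: int) -> bool:
    """True if the requested output stays under the model's output cap and
    prompt + output together stay inside its context window."""
    lim = LIMITS[model]
    return (requested_output <= lim["max_output"]
            and prompt_tokens + requested_output <= lim["context"])

print(fits("MiniMax-M2", 120_000, 8_000))                 # True
print(fits("Qwen2.5-72B-Instruct-128K", 120_000, 8_000))  # False: output cap is 4K
```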
Supported Functionality

| Functionality          | MiniMax-M2    | Qwen2.5-72B-Instruct-128K |
|------------------------|---------------|---------------------------|
| Serverless             | Supported     | Supported                 |
| Serverless LoRA        | Not supported | Not supported             |
| Fine-tuning            | Not supported | Not supported             |
| Embeddings             | Not supported | Not supported             |
| Rerankers              | Not supported | Not supported             |
| Image Input            | Not supported | Not supported             |
| JSON Mode              | Supported     | Supported                 |
| Structured Outputs     | Not supported | Not supported             |
| Tools                  | Supported     | Supported                 |
| FIM Completion         | Not supported | Not supported             |
| Chat Prefix Completion | Supported     | Supported                 |
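
Both models list JSON Mode and Tools as supported. The sketch below shows how those two features are typically exercised through an OpenAI-compatible chat completions client; the base URL, API-key variable, and the "MiniMax-M2" model slug are placeholders rather than values confirmed by this page.

```python
# Minimal sketch of JSON Mode and tool calling via an OpenAI-compatible client.
# The base_url, API-key variable, and model slug are placeholder assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",   # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],  # placeholder credential
)

# JSON Mode: ask the model to return a JSON object instead of free text.
json_reply = client.chat.completions.create(
    model="MiniMax-M2",
    messages=[{"role": "user", "content": "List two strengths of MoE models as JSON."}],
    response_format={"type": "json_object"},
)
print(json_reply.choices[0].message.content)

# Tools: declare a function schema the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_token_price",
        "description": "Look up the per-million-token price for a model.",
        "parameters": {
            "type": "object",
            "properties": {"model": {"type": "string"}},
            "required": ["model"],
        },
    },
}]
tool_reply = client.chat.completions.create(
    model="MiniMax-M2",
    messages=[{"role": "user", "content": "How much does MiniMax-M2 input cost?"}],
    tools=tools,
)
print(tool_reply.choices[0].message.tool_calls)
```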
MiniMax-M2 in Comparison

See how MiniMax-M2 compares with other popular models across key dimensions:

- MiniMax-M2 vs Qwen3-VL-235B-A22B-Instruct
- MiniMax-M2 vs Qwen3-VL-235B-A22B-Thinking
- MiniMax-M2 vs Ring-flash-2.0
- MiniMax-M2 vs Qwen3-Next-80B-A3B-Instruct
- MiniMax-M2 vs Qwen3-Next-80B-A3B-Thinking
- MiniMax-M2 vs gpt-oss-120b
- MiniMax-M2 vs step3
- MiniMax-M2 vs Qwen3-235B-A22B-Thinking-2507
- MiniMax-M2 vs Qwen3-Coder-480B-A35B-Instruct
- MiniMax-M2 vs Qwen3-235B-A22B-Instruct-2507
- MiniMax-M2 vs Qwen2.5-VL-72B-Instruct
- MiniMax-M2 vs Qwen2.5-72B-Instruct
- MiniMax-M2 vs Qwen2.5-72B-Instruct-128K