

Model Comparison: MiniMax-M2 vs step3

Feb 28, 2026

Pricing

|        | MiniMax-M2        | step3             |
| ------ | ----------------- | ----------------- |
| Input  | $0.30 / 1M tokens | $0.57 / 1M tokens |
| Output | $1.20 / 1M tokens | $1.42 / 1M tokens |
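As a worked example of the per-million-token prices above, here is a minimal cost estimator (prices copied from the pricing table; the example token counts are arbitrary):

```python
# Prices in USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "MiniMax-M2": {"input": 0.30, "output": 1.20},
    "step3": {"input": 0.57, "output": 1.42},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10K prompt tokens and 2K completion tokens.
print(f"MiniMax-M2: ${estimate_cost('MiniMax-M2', 10_000, 2_000):.4f}")  # $0.0054
print(f"step3:      ${estimate_cost('step3', 10_000, 2_000):.4f}")       # $0.0085
```

At these sizes, step3 costs roughly 1.6x more per request than MiniMax-M2, driven mostly by its higher input price.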
Metadata

|            | MiniMax-M2   | step3              |
| ---------- | ------------ | ------------------ |
| Created on | Oct 22, 2025 | Jul 28, 2025       |
| License    | MIT          | Apache License 2.0 |
| Provider   | MiniMaxAI    | StepFun            |
Specification

|                      | MiniMax-M2         | step3      |
| -------------------- | ------------------ | ---------- |
| State                | Deprecated         | Deprecated |
| Architecture         | Mixture of Experts | Mixture-of-Experts (MoE) with Multi-Matrix Factorization Attention (MFA) and Attention-FFN Disaggregation (AFD) |
| Calibrated           | No                 | No         |
| Mixture of Experts   | Yes                | Yes        |
| Total parameters     | 230B               | 321B       |
| Activated parameters | 10B                | 38B        |
| Reasoning            | No                 | No         |
| Precision            | FP8                | FP8        |
| Context length       | 197K               | 66K        |
| Max tokens           | 131K               | 66K        |
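Since both models are MoE, the activated-vs-total parameter figures above imply how sparse each forward pass is. A quick calculation from the table's numbers:

```python
# (activated, total) parameter counts from the Specification table.
specs = {
    "MiniMax-M2": (10e9, 230e9),
    "step3": (38e9, 321e9),
}

for name, (active, total) in specs.items():
    # Fraction of all parameters that participate in each token's forward pass.
    print(f"{name}: {active / total:.1%} of parameters active per token")
```

MiniMax-M2 activates roughly 4.3% of its weights per token versus about 11.8% for step3, so despite being the smaller model overall, it is also the sparser one.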
Supported Functionality

|                        | MiniMax-M2    | step3         |
| ---------------------- | ------------- | ------------- |
| Serverless             | Supported     | Supported     |
| Serverless LoRA        | Not supported | Not supported |
| Fine-tuning            | Not supported | Not supported |
| Embeddings             | Not supported | Not supported |
| Rerankers              | Not supported | Not supported |
| Image input            | Not supported | Not supported |
| JSON Mode              | Supported     | Supported     |
| Structured Outputs     | Not supported | Not supported |
| Tools                  | Supported     | Supported     |
| FIM completion         | Not supported | Not supported |
| Chat prefix completion | Supported     | Not supported |
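Both models list JSON Mode and Tools as supported, so a request could combine the two. The sketch below builds such a request body following the common OpenAI-compatible conventions; the exact endpoint, field names, and the `lookup_model` tool are assumptions for illustration, not taken from either provider's documentation. Note that only JSON Mode (valid-JSON output) is available here, since Structured Outputs (schema-enforced output) is listed as not supported for both models.

```python
import json

# Hypothetical chat request combining JSON Mode and a tool declaration,
# assuming an OpenAI-compatible API. Field support should be verified
# against the actual provider docs before use.
payload = {
    "model": "MiniMax-M2",
    "messages": [
        {"role": "user", "content": "List three MoE models as JSON."}
    ],
    # JSON Mode: constrain the reply to be valid JSON (no schema enforcement).
    "response_format": {"type": "json_object"},
    # Tools: declare a function the model may choose to call.
    "tools": [{
        "type": "function",
        "function": {
            "name": "lookup_model",  # hypothetical tool name
            "description": "Look up a model's parameter count.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        },
    }],
}

print(json.dumps(payload, indent=2))
```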
MiniMax-M2 in Comparison

See how MiniMax-M2 compares with other popular models across key dimensions:

- MiniMax-M2.5
- Step-3.5-Flash
- MiniMax-M2.1
- Qwen3-VL-235B-A22B-Instruct
- Qwen3-VL-235B-A22B-Thinking
- Ring-flash-2.0
- Qwen3-Next-80B-A3B-Instruct
- Qwen3-Next-80B-A3B-Thinking
- gpt-oss-120b
- step3
- Qwen3-235B-A22B-Thinking-2507
- Qwen3-Coder-480B-A35B-Instruct
- Qwen3-235B-A22B-Instruct-2507
- Qwen2.5-VL-72B-Instruct
- Qwen2.5-72B-Instruct
- Qwen2.5-72B-Instruct-128K