

Model Comparison: GLM-5 vs MiniMax-M2.1
Feb 15, 2026

Pricing

|        | GLM-5            | MiniMax-M2.1     |
| ------ | ---------------- | ---------------- |
| Input  | $0.30 / M tokens | $0.29 / M tokens |
| Output | $2.55 / M tokens | $1.20 / M tokens |
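The per-token prices above translate directly into a per-request cost. The sketch below computes the dollar cost of one request from the rates in the pricing table; the token counts in the example are illustrative.

```python
# Per-million-token rates taken from the pricing table above.
PRICES = {
    "GLM-5": {"input": 0.30, "output": 2.55},         # $ / M tokens
    "MiniMax-M2.1": {"input": 0.29, "output": 1.20},  # $ / M tokens
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
print(f"{request_cost('GLM-5', 10_000, 2_000):.4f}")         # -> 0.0081
print(f"{request_cost('MiniMax-M2.1', 10_000, 2_000):.4f}")  # -> 0.0053
```

At this prompt/completion ratio MiniMax-M2.1 comes out cheaper, driven almost entirely by its lower output rate.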
Metadata

|            | GLM-5        | MiniMax-M2.1 |
| ---------- | ------------ | ------------ |
| Created on | Feb 11, 2026 | Dec 20, 2025 |
| License    | MIT          | Modified MIT |
| Provider   | Z.ai         | MiniMaxAI    |
Specification

|                      | GLM-5 | MiniMax-M2.1 |
| -------------------- | ----- | ------------ |
| State                | Available | Available |
| Architecture         | Mixture of Experts (MoE) with DeepSeek Sparse Attention (DSA) and an asynchronous RL stack | Not specified |
| Calibrated           | No    | No   |
| Mixture of Experts   | Yes   | No   |
| Total Parameters     | 750B  | 230B |
| Activated Parameters | 40B   | 230B |
| Reasoning            | No    | No   |
| Precision            | FP8   | FP8  |
| Context length       | 205K  | 197K |
| Max Tokens           | 131K  | 131K |
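The context length caps the total tokens in a request (prompt plus completion), while Max Tokens caps the completion alone. A minimal budgeting sketch, assuming "K" in the table means units of 1,024 tokens (the actual limits may be defined differently by each provider):

```python
# Context and output limits from the specification table above.
# Assumption: "K" denotes 1,024 tokens.
CONTEXT = {"GLM-5": 205 * 1024, "MiniMax-M2.1": 197 * 1024}
MAX_OUTPUT = {"GLM-5": 131 * 1024, "MiniMax-M2.1": 131 * 1024}

def max_completion(model: str, prompt_tokens: int) -> int:
    """Largest completion that fits: bounded by the context window space
    left after the prompt AND by the model's max-output cap."""
    remaining = CONTEXT[model] - prompt_tokens
    return max(0, min(remaining, MAX_OUTPUT[model]))

print(max_completion("GLM-5", 100 * 1024))  # context-limited: 105K tokens
print(max_completion("GLM-5", 10 * 1024))   # cap-limited: 131K tokens
```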
Supported Functionality

|                        | GLM-5         | MiniMax-M2.1  |
| ---------------------- | ------------- | ------------- |
| Serverless             | Supported     | Supported     |
| Serverless LoRA        | Not supported | Not supported |
| Fine-tuning            | Not supported | Not supported |
| Embeddings             | Not supported | Not supported |
| Rerankers              | Not supported | Not supported |
| Image input            | Not supported | Not supported |
| JSON Mode              | Not supported | Supported     |
| Structured Outputs     | Not supported | Not supported |
| Tools                  | Supported     | Supported     |
| FIM Completion         | Not supported | Not supported |
| Chat Prefix Completion | Not supported | Supported     |
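Both models support tool calling. The sketch below builds a request body in the OpenAI-compatible Chat Completions format that many serverless providers expose; the `get_weather` tool, the schema details, and the assumption that either provider accepts exactly this shape are illustrative only. Check each provider's API reference before use.

```python
import json

def build_tool_request(model: str, user_message: str) -> dict:
    """Build a chat request offering the model one hypothetical tool."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

body = build_tool_request("GLM-5", "What's the weather in Berlin?")
print(json.dumps(body, indent=2))
```

Since GLM-5 lacks JSON Mode, tool calls like this are its main route to machine-parseable output; MiniMax-M2.1 can additionally fall back on JSON Mode for free-form structured responses.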



