
Model Comparison: DeepSeek-V3.1 vs GLM-4.5V
Feb 15, 2026

Pricing
                     DeepSeek-V3.1       GLM-4.5V
Input                $0.27 / M tokens    $0.14 / M tokens
Output               $1.00 / M tokens    $0.86 / M tokens
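Per-million-token prices are easiest to reason about as a cost-per-request calculation. A minimal sketch, using only the prices in the table above (the `request_cost` helper and the example token counts are illustrative, not part of any SDK):

```python
# Estimate per-request cost (USD) from the per-million-token prices above.
PRICES = {
    "DeepSeek-V3.1": {"input": 0.27, "output": 1.00},
    "GLM-4.5V": {"input": 0.14, "output": 0.86},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost for one request: tokens / 1e6 * price per million tokens."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 50K input tokens and 2K output tokens on each model.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 50_000, 2_000):.4f}")
```

At these token counts GLM-4.5V comes out cheaper, driven mostly by its lower input price.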
Metadata
                     DeepSeek-V3.1       GLM-4.5V
Created on           Aug 21, 2025        Aug 10, 2025
License              MIT                 MIT
Provider             DeepSeek            Z.ai
Specification
                       DeepSeek-V3.1        GLM-4.5V
State                  Available            Available
Architecture           Mixture of Experts   GLM-V family, based on GLM-4.5-Air;
                                            incorporates Chain-of-Thought
                                            reasoning, RLCS (Reinforcement
                                            Learning with Curriculum Sampling),
                                            a Thinking Mode switch, and
                                            Mixture of Experts (MoE)
Calibrated             Yes                  Yes
Mixture of Experts     Yes                  Yes
Total Parameters       671B                 106B
Activated Parameters   37B                  12B
Reasoning              No                   No
Precision              FP8                  FP8
Context Length         164K                 66K
Max Tokens             164K                 66K
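Both models are Mixture of Experts, so only a fraction of the total weights is activated per token. A quick back-of-envelope check using the parameter counts from the table above (the `SPECS` dict is just those table values restated):

```python
# Fraction of parameters activated per token for each MoE model,
# using the total/activated counts from the specification table.
SPECS = {
    "DeepSeek-V3.1": {"total_b": 671, "active_b": 37},
    "GLM-4.5V": {"total_b": 106, "active_b": 12},
}

for model, s in SPECS.items():
    ratio = s["active_b"] / s["total_b"]
    print(f"{model}: {s['active_b']}B / {s['total_b']}B active ~ {ratio:.1%}")
```

DeepSeek-V3.1 activates a smaller share of a much larger model, while GLM-4.5V activates a larger share of a smaller one.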
Supported Functionality
                         DeepSeek-V3.1    GLM-4.5V
Serverless               Supported        Supported
Serverless LoRA          Not supported    Not supported
Fine-tuning              Not supported    Not supported
Embeddings               Not supported    Not supported
Rerankers                Not supported    Not supported
Image input              Not supported    Not supported
JSON Mode                Supported        Not supported
Structured Outputs       Not supported    Not supported
Tools                    Supported        Supported
FIM Completion           Supported        Not supported
Chat Prefix Completion   Supported        Not supported
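JSON Mode, marked supported only for DeepSeek-V3.1 above, is requested via the `response_format` field in OpenAI-style chat-completion requests. A minimal sketch that builds (but does not send) such a payload; the `deepseek-chat` model name and field layout are assumptions based on DeepSeek's OpenAI-compatible API, so verify against current docs before use:

```python
# Build an OpenAI-style chat-completion payload requesting JSON Mode.
# Model name and field names are assumptions (DeepSeek's OpenAI-compatible API);
# this only constructs the request body, it makes no network call.
def json_mode_payload(prompt: str, model: str = "deepseek-chat") -> dict:
    return {
        "model": model,
        "messages": [
            # JSON Mode generally requires the word "json" in the prompt.
            {"role": "system", "content": "Reply with a json object only."},
            {"role": "user", "content": prompt},
        ],
        "response_format": {"type": "json_object"},
    }

payload = json_mode_payload("List the two models being compared.")
print(payload["response_format"])
```

Since the table marks JSON Mode as unsupported for GLM-4.5V, the same `response_format` field should not be assumed to work there.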
DeepSeek-V3.1 in Comparison
See how DeepSeek-V3.1 compares with other popular models across key dimensions.
MiniMax-M2.5
GLM-5
Step-3.5-Flash
GLM-4.7
MiniMax-M2.1
GLM-4.6V
DeepSeek-V3.2
DeepSeek-V3.1-Nex-N1
Kimi-K2-Thinking
MiniMax-M2
DeepSeek-V3.2-Exp
GLM-4.6
DeepSeek-V3.1-Terminus
Qwen3-VL-235B-A22B-Instruct
Qwen3-VL-235B-A22B-Thinking
Ring-flash-2.0
Ling-flash-2.0
Kimi-K2-Instruct-0905
GLM-4.5V
gpt-oss-120b
