
Model Comparison: DeepSeek-V3.2 vs DeepSeek-V3.2-Exp
Jan 13, 2026
| Pricing | DeepSeek-V3.2 | DeepSeek-V3.2-Exp |
| ------- | ------------- | ----------------- |
| Input   | $0.27 / M tokens | $0.27 / M tokens |
| Output  | $0.42 / M tokens | $0.41 / M tokens |
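At these per-million-token rates, estimating the cost of a single request is simple arithmetic. A minimal sketch, with the DeepSeek-V3.2 prices hardcoded from the table above and hypothetical token counts:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float = 0.27,
                 output_price: float = 0.42) -> float:
    """Estimate USD cost of one DeepSeek-V3.2 request.

    Prices are USD per million tokens, taken from the pricing table.
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Hypothetical request: 100K input tokens, 20K output tokens.
cost = request_cost(100_000, 20_000)
print(f"${cost:.4f}")  # → $0.0354
```

Swap in 0.41 for `output_price` to estimate the same request on DeepSeek-V3.2-Exp.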
| Metadata | DeepSeek-V3.2 | DeepSeek-V3.2-Exp |
| -------- | ------------- | ----------------- |
| Created on | Dec 1, 2025 | Sep 29, 2025 |
| License  | MIT | MIT |
| Provider | DeepSeek | DeepSeek |
| Specification | DeepSeek-V3.2 | DeepSeek-V3.2-Exp |
| ------------- | ------------- | ----------------- |
| State | Available | Available |

| Architecture | DeepSeek-V3.2 | DeepSeek-V3.2-Exp |
| ------------ | ------------- | ----------------- |
| Calibrated | No | No |
| Mixture of Experts | No | No |
| Total Parameters | 671B | 671B |
| Activated Parameters | 671B | 671B |
| Reasoning | No | No |
| Precision | FP8 | FP8 |
| Context length | 164K | 164K |
| Max Tokens | 164K | 164K |
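Because prompt and completion share the same 164K context window, the usable completion budget shrinks as the prompt grows. A minimal budgeting sketch; treating "164K" as exactly 163,840 tokens is an assumption, so check your provider's documented limit:

```python
CONTEXT_LENGTH = 163_840  # "164K" from the spec table; exact value assumed

def max_completion_budget(prompt_tokens: int,
                          context_length: int = CONTEXT_LENGTH) -> int:
    """Tokens left for the completion after the prompt, floored at zero.

    Prompt and completion share a single context window.
    """
    return max(0, context_length - prompt_tokens)

print(max_completion_budget(150_000))  # → 13840
```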
| Supported Functionality | DeepSeek-V3.2 | DeepSeek-V3.2-Exp |
| ----------------------- | ------------- | ----------------- |
| Serverless | Supported | Supported |
| Serverless LoRA | Not supported | Not supported |
| Fine-tuning | Not supported | Not supported |
| Embeddings | Not supported | Not supported |
| Rerankers | Not supported | Not supported |
| Image input | Not supported | Not supported |
| JSON Mode | Supported | Supported |
| Structured Outputs | Not supported | Not supported |
| Tools | Supported | Supported |
| FIM Completion | Not supported | Not supported |
| Chat Prefix Completion | Supported | Supported |
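Both models list JSON Mode as supported but Structured Outputs (a JSON-schema constraint) as not supported, so only the generic JSON object mode applies. A sketch of a request body in the OpenAI-compatible chat format commonly used for these models; the model identifier is a placeholder (check your provider's catalog) and no request is actually sent:

```python
import json

# Request body for JSON Mode. "deepseek-v3p2" is a hypothetical model
# identifier -- substitute the exact name from your provider's catalog.
payload = {
    "model": "deepseek-v3p2",
    "messages": [
        {"role": "system",
         "content": "Reply in JSON with keys 'sentiment' and 'score'."},
        {"role": "user", "content": "The new pricing is great."},
    ],
    # Generic JSON object mode only: Structured Outputs (schema-constrained
    # generation) is listed as Not supported for both models.
    "response_format": {"type": "json_object"},
}

body = json.dumps(payload)
print(json.loads(body)["response_format"])  # → {'type': 'json_object'}
```

Tool calling (also listed as Supported) uses the same payload shape with an added `tools` array in the OpenAI-compatible format.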
DeepSeek-V3.2 in Comparison
See how DeepSeek-V3.2 compares with other popular models across key dimensions.
DeepSeek-V3.2 vs:
- GLM-4.7
- Kimi-K2-Thinking
- MiniMax-M2
- DeepSeek-V3.2-Exp
- GLM-4.6
- Qwen3-VL-235B-A22B-Instruct
- Qwen3-VL-235B-A22B-Thinking
- Kimi-K2-Instruct-0905
- step3
- Qwen3-235B-A22B-Thinking-2507
- Qwen3-Coder-480B-A35B-Instruct
- Qwen3-235B-A22B-Instruct-2507
- Kimi-K2-Instruct
- Kimi-Dev-72B
- MiniMax-M1-80k
- ERNIE-4.5-300B-A47B
