
Model Comparison
DeepSeek-V4-Flash vs GLM-4.6V
May 11, 2026

Pricing
            DeepSeek-V4-Flash    GLM-4.6V
Input       $0.14 / M tokens     $0.30 / M tokens
Output      $0.28 / M tokens     $0.90 / M tokens
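At these rates, the cost of a request is just tokens times the per-million price. A minimal sketch using the prices from the table above (the token counts in the example are hypothetical):

```python
# Per-million-token prices in USD, taken from the pricing table above.
PRICES = {
    "DeepSeek-V4-Flash": {"input": 0.14, "output": 0.28},
    "GLM-4.6V": {"input": 0.30, "output": 0.90},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request: (tokens / 1M) * price per million tokens."""
    p = PRICES[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

# Hypothetical request: 120K input tokens, 8K output tokens.
for model in PRICES:
    print(model, round(request_cost(model, 120_000, 8_000), 5))
```

For this workload the output-price gap dominates: the same request costs roughly $0.019 on DeepSeek-V4-Flash versus roughly $0.043 on GLM-4.6V.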
Metadata
            DeepSeek-V4-Flash    GLM-4.6V
Created on  Apr 22, 2026         Dec 7, 2025
License     MIT                  MIT
Provider    DeepSeek             Z.ai
Specification
                      DeepSeek-V4-Flash    GLM-4.6V
State                 Available            Available
Architecture          MoE                  Multimodal MoE
Calibrated            Yes                  Yes
Mixture of Experts    Yes                  Yes
Total parameters      158B                 106B
Activated parameters  13B                  106B
Reasoning             No                   No
Precision             FP8                  FP8
Context length        1049K                131K
Max tokens            393K                 131K
Supported Functionality
                        DeepSeek-V4-Flash    GLM-4.6V
Serverless              Supported            Supported
Serverless LoRA         Not supported        Not supported
Fine-tuning             Not supported        Not supported
Embeddings              Not supported        Not supported
Rerankers               Not supported        Not supported
Image input             Not supported        Not supported
JSON mode               Supported            Not supported
Structured outputs      Not supported        Not supported
Tools                   Supported            Supported
FIM completion          Not supported        Not supported
Chat prefix completion  Supported            Not supported
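Of the two, only DeepSeek-V4-Flash supports JSON mode, which in OpenAI-compatible chat APIs is requested via a `response_format` field on the request body. A minimal sketch of building such a payload; the model id `"deepseek-v4-flash"` and the prompt are illustrative assumptions, and no network call is made here:

```python
def build_json_mode_payload(model: str, user_prompt: str) -> dict:
    """Build an OpenAI-style chat payload with JSON mode enabled."""
    return {
        "model": model,  # assumed model id for illustration
        "messages": [
            # JSON-mode endpoints typically require the prompt itself to
            # mention JSON, so we ask for a JSON object explicitly.
            {"role": "system", "content": "Reply only with a valid JSON object."},
            {"role": "user", "content": user_prompt},
        ],
        # This flag constrains the model to emit syntactically valid JSON.
        "response_format": {"type": "json_object"},
    }

payload = build_json_mode_payload("deepseek-v4-flash", "List three primes as JSON.")
print(payload["response_format"])  # {'type': 'json_object'}
```

Per the table, the same payload sent to GLM-4.6V would need the `response_format` field dropped, since JSON mode is not supported there.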