deepseek-vl2
About deepseek-vl2
DeepSeek-VL2 is a Mixture-of-Experts (MoE) vision-language model built on DeepSeekMoE-27B. Its sparsely activated MoE architecture achieves strong performance while activating only 4.5B parameters per token. The model excels at tasks including visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. Compared with existing open-source dense models and MoE-based models, it delivers competitive or state-of-the-art results with the same or fewer activated parameters.
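Since the model accepts image input on a serverless endpoint, a typical way to exercise its visual question answering capability is an OpenAI-style chat completions request with mixed text and image content parts. The endpoint URL, model identifier, and exact payload shape below are assumptions based on the common OpenAI-compatible convention, not confirmed by this page; check the provider's API reference before use.

```python
import json

# Assumed OpenAI-compatible endpoint and model ID; verify against provider docs.
API_URL = "https://api.siliconflow.cn/v1/chat/completions"  # assumption
MODEL_ID = "deepseek-ai/deepseek-vl2"                       # assumption

def build_vqa_request(question: str, image_url: str) -> dict:
    """Build a chat completions payload with one text part and one image part."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 512,  # stays well under the model's 4K output cap
    }

payload = build_vqa_request(
    "What is shown in this chart?", "https://example.com/chart.png"
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the endpoint with a bearer token; only the message structure is sketched here.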
Available Serverless
Run queries immediately, pay only for usage
$0.15 / $0.15 Per 1M Tokens (input/output)
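With input and output both priced at $0.15 per 1M tokens, estimating the cost of a request is simple arithmetic. A minimal sketch (token counts are illustrative):

```python
PRICE_PER_M_USD = 0.15  # same rate for input and output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request at $0.15 per 1M tokens each way."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M_USD

# A request using 3,000 input tokens and 1,000 output tokens:
print(f"${cost_usd(3000, 1000):.4f}")  # → $0.0006
```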
Metadata
Specification
State: Available
Architecture
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 27B
Activated Parameters: 4.5B
Reasoning: No
Precision: FP8
Context Length: 4K
Max Tokens: 4K
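With a 4K context length and a 4K max-token cap, long prompts leave little room for the completion. How the two limits interact is not stated on this page; the sketch below assumes the common convention that prompt and completion tokens share the context window:

```python
CONTEXT_LIMIT = 4096  # 4K context length
MAX_OUTPUT = 4096     # 4K max tokens per completion

def clamp_max_tokens(prompt_tokens: int, requested: int) -> int:
    """Clamp a requested completion length to what the window allows.

    Assumes prompt and completion share the 4K context window, which
    this page does not confirm.
    """
    remaining = max(CONTEXT_LIMIT - prompt_tokens, 0)
    return min(requested, remaining, MAX_OUTPUT)

# A 3,000-token prompt leaves at most 1,096 tokens for the reply:
print(clamp_max_tokens(3000, 4096))  # → 1096
```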
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Not supported
FIM Completion: Not supported
Chat Prefix Completion: Supported
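Because JSON Mode is supported (while Structured Outputs and Tools are not), constraining the model to emit valid JSON is done at the request level. The `response_format` field below follows the widespread OpenAI convention and is an assumption about this provider's API, not confirmed by this page:

```python
# Assumed model ID and response_format convention; verify against provider docs.
payload = {
    "model": "deepseek-ai/deepseek-vl2",  # assumption
    "messages": [
        {
            "role": "user",
            "content": "Extract the table headers from the document as a JSON array.",
        }
    ],
    "response_format": {"type": "json_object"},  # JSON Mode: force valid JSON output
    "max_tokens": 1024,
}
```

Note that JSON Mode guarantees syntactically valid JSON but not a particular schema; schema enforcement (Structured Outputs) is listed as not supported for this model.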
SiliconFlow Service
Comprehensive solutions to deploy and scale your AI applications with maximum flexibility
60% lower latency
2x higher throughput
65% cost savings
Compare with Other Models
See how this model stacks up against others.
All models below are DeepSeek chat models; prices are per 1M tokens.

Model                          Released        Context   Max Output   Input    Output
DeepSeek-V3.2-Exp              Oct 10, 2025    164K      164K         $0.27    $0.41
DeepSeek-V3.1-Terminus         Sep 29, 2025    164K      164K         $0.27    $1.0
DeepSeek-V3.1                  Aug 25, 2025    164K      164K         $0.27    $1.0
DeepSeek-V3                    Dec 26, 2024    164K      164K         $0.25    $1.0
DeepSeek-R1                    May 28, 2025    164K      164K         $0.5     $2.18
DeepSeek-R1-Distill-Qwen-32B   Jan 20, 2025    131K      131K         $0.18    $0.18
DeepSeek-R1-Distill-Qwen-14B   Jan 20, 2025    131K      131K         $0.1     $0.1
DeepSeek-R1-Distill-Qwen-7B    Jan 20, 2025    33K       16K          $0.05    $0.05
deepseek-vl2                   Dec 13, 2024    4K        4K           $0.15    $0.15
Model FAQs: Usage, Deployment
Learn how to use, fine-tune, and deploy this model with ease.
