
Model Comparison: deepseek-vl2 vs Qwen3-Next-80B-A3B-Instruct
Feb 15, 2026

Pricing

| Pricing | deepseek-vl2 | Qwen3-Next-80B-A3B-Instruct |
|---------|--------------|-----------------------------|
| Input   | $0.15 / M tokens | $0.14 / M tokens |
| Output  | $0.15 / M tokens | $1.40 / M tokens |
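The per-million-token prices above translate into per-request cost with simple arithmetic. The sketch below is illustrative only; the helper function and token counts are hypothetical, and the prices are taken from the table.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Cost in USD for one request, given prices in USD per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a request with 10K input tokens and 1K output tokens.
deepseek = request_cost(10_000, 1_000, 0.15, 0.15)  # deepseek-vl2
qwen     = request_cost(10_000, 1_000, 0.14, 1.40)  # Qwen3-Next-80B-A3B-Instruct
print(deepseek, qwen)  # 0.00165 0.0028
```

Note that Qwen3-Next's higher output price dominates once responses grow long, even though its input price is slightly lower.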
Metadata

| Metadata   | deepseek-vl2 | Qwen3-Next-80B-A3B-Instruct |
|------------|--------------|-----------------------------|
| Created on | Dec 13, 2024 | Sep 9, 2025 |
| License    | DeepSeek Model License | Apache-2.0 |
| Provider   | DeepSeek     | Qwen |
Specification

| Specification        | deepseek-vl2 | Qwen3-Next-80B-A3B-Instruct |
|----------------------|--------------|-----------------------------|
| State                | Available    | Available |
| Architecture         | Sparse-activated MoE | Qwen3-Next architecture featuring Hybrid Attention (Gated DeltaNet and Gated Attention), High-Sparsity Mixture-of-Experts (MoE), Stability Optimizations, and Multi-Token Prediction (MTP) |
| Calibrated           | No           | No |
| Mixture of Experts   | Yes          | Yes |
| Total Parameters     | 27B          | 80B |
| Activated Parameters | 4.5B         | 3B |
| Reasoning            | No           | No |
| Precision            | FP8          | FP8 |
| Context Length       | 4K           | 262K |
| Max Tokens           | 4K           | 262K |
Supported Functionality

| Functionality          | deepseek-vl2  | Qwen3-Next-80B-A3B-Instruct |
|------------------------|---------------|-----------------------------|
| Serverless             | Supported     | Supported |
| Serverless LoRA        | Not supported | Not supported |
| Fine-tuning            | Not supported | Not supported |
| Embeddings             | Not supported | Not supported |
| Rerankers              | Not supported | Not supported |
| Image input            | Not supported | Not supported |
| JSON Mode              | Supported     | Supported |
| Structured Outputs     | Not supported | Not supported |
| Tools                  | Not supported | Supported |
| FIM Completion         | Not supported | Not supported |
| Chat Prefix Completion | Supported     | Supported |
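Since both models report JSON Mode support, the same request shape works for either via an OpenAI-compatible chat completions endpoint. The sketch below only builds the request payload; the model ID string and helper function are illustrative assumptions, not documented identifiers.

```python
import json

def build_chat_request(model: str, prompt: str, json_mode: bool = False) -> str:
    """Build an OpenAI-compatible /chat/completions request body.

    JSON Mode is typically requested via response_format={"type": "json_object"},
    which constrains the model's output to valid JSON.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if json_mode:
        body["response_format"] = {"type": "json_object"}
    return json.dumps(body)

# Hypothetical model ID; substitute whatever your provider exposes.
req = build_chat_request(
    "Qwen/Qwen3-Next-80B-A3B-Instruct",
    "List the model's total and activated parameters as JSON.",
    json_mode=True,
)
print(req)
```

Note that JSON Mode only guarantees syntactically valid JSON; neither model supports Structured Outputs, so schema conformance is not enforced.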
