
Model Comparison: DeepSeek-R1-Distill-Qwen-14B vs Qwen2.5-VL-7B-Instruct
Feb 10, 2026

Pricing
          DeepSeek-R1-Distill-Qwen-14B   Qwen2.5-VL-7B-Instruct
Input     $0.10 / M tokens               $0.05 / M tokens
Output    $0.10 / M tokens               $0.05 / M tokens
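The flat per-million-token rates above make request costs straightforward to estimate. A minimal sketch, using only the prices listed in the pricing table (the helper function is illustrative, not a provider API):

```python
# Estimate request cost from the per-million-token rates in the pricing table.
PRICES_PER_M = {  # USD per 1M tokens: (input rate, output rate)
    "DeepSeek-R1-Distill-Qwen-14B": (0.10, 0.10),
    "Qwen2.5-VL-7B-Instruct": (0.05, 0.05),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    in_rate, out_rate = PRICES_PER_M[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 100K input + 10K output tokens on the 14B model.
print(round(request_cost("DeepSeek-R1-Distill-Qwen-14B", 100_000, 10_000), 4))  # → 0.011
```

At these rates, even a full 131K-token context on the 14B model costs about a cent of input.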
Metadata
            DeepSeek-R1-Distill-Qwen-14B   Qwen2.5-VL-7B-Instruct
Created on  Jan 20, 2025                   Jan 26, 2025
License     MIT                            Apache-2.0
Provider    DeepSeek                       Qwen
Specification
                      DeepSeek-R1-Distill-Qwen-14B   Qwen2.5-VL-7B-Instruct
State                 Available                      Available
Architecture          Dense                          Vision-Language Model (VLM)*
Calibrated            No                             No
Mixture of Experts    No                             No
Total Parameters      14B                            7B
Activated Parameters  14B                            7B
Reasoning             No                             No
Precision             FP8                            FP8
Context length        131K                           33K
Max Tokens            131K                           4K

* Qwen2.5-VL-7B-Instruct combines a Vision Transformer (ViT) with window attention, SwiGLU, and RMSNorm, aligned with the Qwen2.5 LLM structure. It uses mRoPE for temporal understanding and YaRN for long-context handling.
Supported Functionality
                        DeepSeek-R1-Distill-Qwen-14B   Qwen2.5-VL-7B-Instruct
Serverless              Supported                      Supported
Serverless LoRA         Not supported                  Not supported
Fine-tuning             Not supported                  Not supported
Embeddings              Not supported                  Not supported
Rerankers               Not supported                  Not supported
Image input             Not supported                  Not supported
JSON Mode               Supported                      Not supported
Structured Outputs      Not supported                  Not supported
Tools                   Supported                      Not supported
FIM Completion          Supported                      Not supported
Chat Prefix Completion  Not supported                  Supported
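Per the table, DeepSeek-R1-Distill-Qwen-14B is the only one of the pair supporting JSON Mode and tool calls. A minimal sketch of an OpenAI-compatible chat-completion payload exercising both; the endpoint, field layout, and the `get_price` tool are assumptions for illustration, so consult the provider's API reference before use:

```python
import json

# Hypothetical OpenAI-compatible request body; JSON Mode and tool calling
# are both listed as supported only for DeepSeek-R1-Distill-Qwen-14B.
payload = {
    "model": "DeepSeek-R1-Distill-Qwen-14B",
    "messages": [
        {"role": "user", "content": "Summarize the pricing table as JSON."}
    ],
    "response_format": {"type": "json_object"},  # JSON Mode
    "tools": [  # tool calling, also listed as supported
        {
            "type": "function",
            "function": {
                "name": "get_price",  # hypothetical tool for illustration
                "description": "Look up a model's per-million-token price.",
                "parameters": {
                    "type": "object",
                    "properties": {"model": {"type": "string"}},
                    "required": ["model"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

For Qwen2.5-VL-7B-Instruct, neither field applies; its distinguishing capability in this table is chat prefix completion.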
DeepSeek-R1-Distill-Qwen-14B in Comparison
See how DeepSeek-R1-Distill-Qwen-14B compares with other popular models across key dimensions.
vs Qwen3-VL-32B-Instruct
vs Qwen3-VL-32B-Thinking
vs Qwen3-VL-8B-Instruct
vs Qwen3-VL-8B-Thinking
vs Qwen3-VL-30B-A3B-Instruct
vs Qwen3-VL-30B-A3B-Thinking
vs Qwen3-Omni-30B-A3B-Instruct
vs Qwen3-Omni-30B-A3B-Captioner
vs Qwen3-Omni-30B-A3B-Thinking
vs Qwen3-Next-80B-A3B-Instruct
vs Qwen3-Next-80B-A3B-Thinking
vs Ling-mini-2.0
vs Hunyuan-MT-7B
vs gpt-oss-20b
vs Qwen3-Coder-30B-A3B-Instruct
vs Qwen3-30B-A3B-Thinking-2507
vs Qwen3-30B-A3B-Instruct-2507
vs GLM-4.1V-9B-Thinking
vs Hunyuan-A13B-Instruct
vs Qwen3-14B
