Model Comparison: Qwen2.5-VL-32B-Instruct vs Qwen2.5-VL-7B-Instruct

Feb 28, 2026

Pricing

| Price ($ / M tokens) | Qwen2.5-VL-32B-Instruct | Qwen2.5-VL-7B-Instruct |
|----------------------|-------------------------|------------------------|
| Input                | $0.27                   | $0.05                  |
| Output               | $0.27                   | $0.05                  |
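For a sense of scale, per-request cost follows directly from the per-million-token rates above. The Python sketch below is purely illustrative arithmetic; the request sizes are hypothetical example values, not benchmarks.

```python
# Illustrative cost arithmetic from the $/M-token rates listed above.

PRICES = {  # USD per 1M tokens: (input rate, output rate)
    "Qwen2.5-VL-32B-Instruct": (0.27, 0.27),
    "Qwen2.5-VL-7B-Instruct": (0.05, 0.05),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical request: 10K prompt tokens, 1K completion tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.6f}")
# Qwen2.5-VL-32B-Instruct: $0.002970
# Qwen2.5-VL-7B-Instruct: $0.000550
```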

Metadata

| Field      | Qwen2.5-VL-32B-Instruct | Qwen2.5-VL-7B-Instruct |
|------------|-------------------------|------------------------|
| Created on | Mar 21, 2025            | Jan 26, 2025           |
| License    | Apache-2.0              | Apache-2.0             |
| Provider   | Qwen                    | Qwen                   |

Specification

| Spec                 | Qwen2.5-VL-32B-Instruct | Qwen2.5-VL-7B-Instruct |
|----------------------|-------------------------|------------------------|
| State                | Available               | Available              |
| Architecture         | Vision Transformer (ViT) with window attention, SwiGLU, RMSNorm, and mRoPE, aligned with the Qwen2.5 LLM structure | Vision-Language Model (VLM) combining a ViT with window attention, SwiGLU, and RMSNorm, aligned with the Qwen2.5 LLM structure; uses mRoPE for temporal understanding and YaRN for long-context handling |
| Calibrated           | Yes                     | No                     |
| Mixture of Experts   | No                      | No                     |
| Total Parameters     | 32B                     | 7B                     |
| Activated Parameters | 32B                     | 7B                     |
| Reasoning            | No                      | No                     |
| Precision            | FP8                     | FP8                    |
| Context Length       | 131K                    | 33K                    |
| Max Output Tokens    | 131K                    | 4K                     |
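Both architecture rows mention mRoPE (multimodal rotary position embedding), which, as described in the Qwen2-VL/Qwen2.5-VL reports, partitions the rotary dimensions across temporal, height, and width position components so image patches carry 3D positions. The sketch below is a minimal NumPy illustration of that partitioning, not the models' actual implementation; the split sizes, shapes, and patch grid are assumptions chosen for illustration.

```python
import numpy as np

def rope_angles(pos, dims, base=10000.0):
    """Standard RoPE angles for scalar positions `pos` over `dims` rotary pairs."""
    inv_freq = base ** (-np.arange(dims) / dims)           # (dims,)
    return np.outer(pos, inv_freq)                          # (seq, dims)

def mrope_rotate(x, t_pos, h_pos, w_pos, split=(8, 12, 12)):
    """Minimal mRoPE sketch: rotary pairs are partitioned among the temporal,
    height, and width position components. `split` is an illustrative choice
    (the real models use their own partition); sum(split) == head_dim // 2."""
    seq, head_dim = x.shape
    assert sum(split) == head_dim // 2
    # Per-pair rotation angles, chunked by position component.
    angles = np.concatenate([
        rope_angles(t_pos, split[0]),
        rope_angles(h_pos, split[1]),
        rope_angles(w_pos, split[2]),
    ], axis=-1)                                             # (seq, head_dim // 2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                         # even/odd halves of each pair
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical 2x2 patch grid at a single time step, head_dim = 64.
x = np.random.randn(4, 64)
t = np.zeros(4)                        # all patches share one time index
h = np.array([0, 0, 1, 1])             # row index of each patch
w = np.array([0, 1, 0, 1])             # column index of each patch
print(mrope_rotate(x, t, h, w).shape)  # (4, 64)
```

For plain text tokens the three components collapse to the same index, so mRoPE reduces to ordinary 1D RoPE.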

Supported Functionality

| Feature                | Qwen2.5-VL-32B-Instruct | Qwen2.5-VL-7B-Instruct |
|------------------------|-------------------------|------------------------|
| Serverless             | Supported               | Supported              |
| Serverless LoRA        | Not supported           | Not supported          |
| Fine-tuning            | Not supported           | Not supported          |
| Embeddings             | Not supported           | Not supported          |
| Rerankers              | Not supported           | Not supported          |
| Image Input            | Not supported           | Not supported          |
| JSON Mode              | Not supported           | Not supported          |
| Structured Outputs     | Not supported           | Not supported          |
| Tools                  | Not supported           | Not supported          |
| FIM Completion         | Not supported           | Not supported          |
| Chat Prefix Completion | Supported               | Supported              |
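Both models are listed as serverless with chat prefix completion supported. The sketch below assumes an OpenAI-compatible chat endpoint; the base URL, environment variable, model ID, and the trailing-assistant-message convention for prefix completion are all assumptions to verify against the provider's documentation.

```python
import os
from openai import OpenAI

# Assumed OpenAI-compatible endpoint and credentials; verify the base URL
# and auth scheme against the provider's docs before use.
client = OpenAI(
    api_key=os.environ["SILICONFLOW_API_KEY"],   # hypothetical env var
    base_url="https://api.siliconflow.com/v1",   # assumed endpoint
)

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",         # assumed model ID
    messages=[
        {"role": "user", "content": "Summarize mRoPE in one sentence."},
        # Chat prefix completion: the model continues this assistant
        # prefix rather than starting fresh. The exact convention or
        # flag is provider-specific; this trailing-assistant form is
        # an assumption to check against the docs.
        {"role": "assistant", "content": "mRoPE is"},
    ],
    max_tokens=1024,  # keep within the 4K output cap listed above for the 7B model
)
print(resp.choices[0].message.content)
```

Note that the table lists image input as not supported on this deployment, so the example sends text only despite the models being VLMs.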
