

Model Comparison: GLM-4.1V-9B-Thinking vs Qwen2.5-VL-72B-Instruct
Feb 28, 2026

Pricing

|        | GLM-4.1V-9B-Thinking | Qwen2.5-VL-72B-Instruct |
|--------|----------------------|-------------------------|
| Input  | $0.035 / M tokens    | $0.59 / M tokens        |
| Output | $0.14 / M tokens     | $0.59 / M tokens        |
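The per-million-token rates above translate directly into per-request cost. A minimal sketch (the `request_cost` helper is illustrative, not part of any provider SDK) using the prices from the table:

```python
# Prices in USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "GLM-4.1V-9B-Thinking": {"input": 0.035, "output": 0.14},
    "Qwen2.5-VL-72B-Instruct": {"input": 0.59, "output": 0.59},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10K input tokens and 1K output tokens.
glm = request_cost("GLM-4.1V-9B-Thinking", 10_000, 1_000)
qwen = request_cost("Qwen2.5-VL-72B-Instruct", 10_000, 1_000)
print(f"GLM-4.1V-9B-Thinking:    ${glm:.6f}")   # $0.000490
print(f"Qwen2.5-VL-72B-Instruct: ${qwen:.6f}")  # $0.006490
```

At these rates the 9B model is roughly 13x cheaper for this input/output mix, which matters when the workload is high-volume and either model is accurate enough.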
Metadata

|            | GLM-4.1V-9B-Thinking | Qwen2.5-VL-72B-Instruct |
|------------|----------------------|-------------------------|
| Created on | Jun 28, 2025         | Jan 27, 2025            |
| License    | MIT                  | -                       |
| Provider   | Z.ai                 | Qwen                    |
Specification

|                      | GLM-4.1V-9B-Thinking | Qwen2.5-VL-72B-Instruct |
|----------------------|----------------------|-------------------------|
| State                | Deprecated           | Available               |
| Architecture         | Vision-Language Model (VLM) based on GLM-4-9B-0414 with thinking paradigm | Vision-Language Model (VLM) with a streamlined and efficient vision encoder (ViT with window attention, SwiGLU, RMSNorm) aligned with the Qwen2.5 LLM structure. Features include dynamic resolution and frame rate training for video understanding, mRoPE for temporal sequence and speed, and YaRN for long-context length extrapolation. |
| Calibrated           | No                   | No                      |
| Mixture of Experts   | No                   | No                      |
| Total Parameters     | 9B                   | 72B                     |
| Activated Parameters | 9B                   | 72B                     |
| Reasoning            | No                   | No                      |
| Precision            | FP8                  | FP8                     |
| Context length       | 66K                  | 131K                    |
| Max Tokens           | 66K                  | 4K                      |
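Context length and Max Tokens constrain requests differently: the prompt and completion together must fit in the context window, while Max Tokens caps the completion alone. A sketch of that check, using the rounded figures from the table (actual serving limits are often powers of two, e.g. 131K is likely 131,072, so treat these numbers as approximations):

```python
# Rounded limits from the specification table above; real deployments may
# enforce slightly different exact values.
LIMITS = {
    "GLM-4.1V-9B-Thinking": {"context": 66_000, "max_tokens": 66_000},
    "Qwen2.5-VL-72B-Instruct": {"context": 131_000, "max_tokens": 4_000},
}

def fits(model: str, prompt_tokens: int, completion_tokens: int) -> bool:
    """True if the request respects both the output cap and the context window."""
    lim = LIMITS[model]
    if completion_tokens > lim["max_tokens"]:
        return False  # completion alone exceeds the per-request output cap
    return prompt_tokens + completion_tokens <= lim["context"]

# A 100K-token prompt only fits in Qwen2.5-VL-72B-Instruct's 131K window:
print(fits("GLM-4.1V-9B-Thinking", 100_000, 1_000))     # False
print(fits("Qwen2.5-VL-72B-Instruct", 100_000, 1_000))  # True
# But Qwen's 4K Max Tokens cap rules out long generations even with room to spare:
print(fits("Qwen2.5-VL-72B-Instruct", 10_000, 8_000))   # False
```

So the trade-off here is long inputs (Qwen's 131K window) versus long outputs (GLM's 66K output allowance).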
Supported Functionality

|                        | GLM-4.1V-9B-Thinking | Qwen2.5-VL-72B-Instruct |
|------------------------|----------------------|-------------------------|
| Serverless             | Supported            | Supported               |
| Serverless LoRA        | Not supported        | Not supported           |
| Fine-tuning            | Not supported        | Not supported           |
| Embeddings             | Not supported        | Not supported           |
| Rerankers              | Not supported        | Not supported           |
| Image input            | Not supported        | Not supported           |
| JSON Mode              | Not supported        | Not supported           |
| Structured Outputs     | Not supported        | Not supported           |
| Tools                  | Not supported        | Not supported           |
| FIM Completion         | Not supported        | Not supported           |
| Chat Prefix Completion | Not supported        | Supported               |
GLM-4.1V-9B-Thinking in Comparison

See how GLM-4.1V-9B-Thinking compares with other popular models across key dimensions: Qwen3-VL-32B-Instruct, Qwen3-VL-32B-Thinking, Qwen3-VL-8B-Instruct, Qwen3-VL-8B-Thinking, Qwen3-VL-30B-A3B-Instruct, Qwen3-VL-30B-A3B-Thinking, Qwen3-Omni-30B-A3B-Instruct, Qwen3-Omni-30B-A3B-Captioner, Qwen3-Omni-30B-A3B-Thinking, Qwen3-Next-80B-A3B-Instruct, Qwen3-Next-80B-A3B-Thinking, Ling-mini-2.0, Hunyuan-MT-7B, gpt-oss-20b, Qwen3-Coder-30B-A3B-Instruct, Qwen3-30B-A3B-Thinking-2507, Qwen3-30B-A3B-Instruct-2507, Hunyuan-A13B-Instruct, Qwen3-14B, and Qwen3-30B-A3B.