

Model Comparison
GLM-4-32B-0414 vs
Jan 13, 2026
Pricing

| Price  | GLM-4-32B-0414   |              |
|--------|------------------|--------------|
| Input  | $0.27 / M Tokens | $ / M Tokens |
| Output | $0.27 / M Tokens | $ / M Tokens |
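At $0.27 per million tokens for both input and output, estimating the cost of a request against GLM-4-32B-0414 is simple arithmetic. A minimal sketch (the helper name and example token counts are illustrative, not part of any provider SDK):

```python
# Cost estimate for GLM-4-32B-0414 at $0.27 per million tokens,
# the same rate for input and output per the pricing table above.
INPUT_PRICE_PER_M = 0.27
OUTPUT_PRICE_PER_M = 0.27

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 12,000-token prompt with a 3,000-token completion.
# 15,000 tokens total at $0.27/M comes to $0.00405.
cost = request_cost(12_000, 3_000)
```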
Metadata

| Specification        | GLM-4-32B-0414 |     |
|----------------------|----------------|-----|
| State                | Available      |     |
| Architecture         |                |     |
| Calibrated           | Yes            | Yes |
| Mixture of Experts   | No             | Yes |
| Total Parameters     | 32B            |     |
| Activated Parameters | 32B            |     |
| Reasoning            | No             | Yes |
| Precision            | FP8            |     |
| Context Length       | 33K            |     |
| Max Tokens           | 33K            |     |
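Because the context length and max tokens are both 33K, a prompt plus the completion budget reserved for it must fit in a single window. A rough sketch, assuming "33K" means 33,000 tokens (some providers round 32,768 up to 33K) and that token counts are already known — a real deployment would measure them with the model's tokenizer:

```python
# Assumed window size; "33K" may actually be 32,768 depending on the provider.
CONTEXT_LENGTH = 33_000

def fits_context(prompt_tokens: int, max_new_tokens: int) -> bool:
    """True if the prompt plus the reserved completion budget fits in one window."""
    return prompt_tokens + max_new_tokens <= CONTEXT_LENGTH

ok = fits_context(30_000, 2_000)        # 32,000 tokens: fits
too_big = fits_context(30_000, 4_000)   # 34,000 tokens: overflows the window
```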
Supported Functionality

| Feature                | GLM-4-32B-0414 |               |
|------------------------|----------------|---------------|
| Serverless             | Supported      | Supported     |
| Serverless LoRA        | Not supported  | Supported     |
| Fine-tuning            | Not supported  | Supported     |
| Embeddings             | Not supported  | Not supported |
| Rerankers              | Not supported  | Supported     |
| Image input            | Not supported  | Not supported |
| JSON Mode              | Supported      | Supported     |
| Structured Outputs     | Not supported  | Supported     |
| Tools                  | Supported      | Supported     |
| FIM Completion         | Not supported  | Supported     |
| Chat Prefix Completion | Not supported  | Supported     |
GLM-4-32B-0414 in Comparison
See how GLM-4-32B-0414 compares with other popular models across key dimensions.
- GLM-4.6V
- Qwen3-VL-32B-Instruct
- Qwen3-VL-32B-Thinking
- Qwen3-VL-8B-Instruct
- Qwen3-VL-8B-Thinking
- Qwen3-VL-30B-A3B-Instruct
- Qwen3-VL-30B-A3B-Thinking
- Qwen3-Omni-30B-A3B-Instruct
- Ring-flash-2.0
- Ling-flash-2.0
- Qwen3-Omni-30B-A3B-Captioner
- Qwen3-Omni-30B-A3B-Thinking
- Qwen3-Next-80B-A3B-Instruct
- Qwen3-Next-80B-A3B-Thinking
- Ling-mini-2.0
- Hunyuan-MT-7B
- gpt-oss-120b
- gpt-oss-20b
- Qwen3-Coder-30B-A3B-Instruct
- Qwen3-30B-A3B-Thinking-2507