GLM-4-32B-0414
About GLM-4-32B-0414
GLM-4-32B-0414 is a new-generation model in the GLM family with 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T tokens of high-quality data, including a large amount of reasoning-oriented synthetic data, laying the foundation for subsequent reinforcement-learning extensions. In the post-training stage, in addition to human-preference alignment for dialogue scenarios, the team used techniques such as rejection sampling and reinforcement learning to strengthen instruction following, engineering code, and function calling, building up the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves strong results in engineering code, Artifact generation, function calling, search-based Q&A, and report generation. On several benchmarks, its performance approaches or even exceeds that of larger models such as GPT-4o and DeepSeek-V3-0324 (671B).
Available Serverless
Run queries immediately, pay only for usage.
$0.27 input / $0.27 output per 1M tokens
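Given the flat $0.27 per 1M tokens for both input and output, per-request cost is simple arithmetic. A minimal sketch (rates taken from the pricing above; the function name is illustrative):

```python
# Serverless pricing for GLM-4-32B-0414: $0.27 per 1M tokens,
# the same rate for input and output (see pricing above).
INPUT_RATE_PER_M = 0.27
OUTPUT_RATE_PER_M = 0.27

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 20K-token prompt with a 2K-token completion.
print(f"${estimate_cost(20_000, 2_000):.6f}")  # → $0.005940
```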
Metadata
Specification
State: Available
Architecture
Calibrated: Yes
Mixture of Experts: No
Total Parameters: 32B
Activated Parameters: 32B
Reasoning: No
Precision: FP8
Context Length: 33K
Max Tokens: 33K
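Because the prompt and the completion share the same 33K window, requests should keep prompt tokens plus the requested output under that limit. A small guard, as a sketch: the exact figure of 32,768 is an assumption (listings often round 32,768 tokens up to "33K"), and the helper names are illustrative:

```python
# Context budget guard for GLM-4-32B-0414. The spec lists the context
# length as 33K; 32_768 is an assumption (providers commonly round
# 32,768 tokens up to "33K").
CONTEXT_LENGTH = 32_768

def fits_in_context(prompt_tokens: int, max_new_tokens: int,
                    context_length: int = CONTEXT_LENGTH) -> bool:
    """True if the prompt plus the requested completion fits the window."""
    return prompt_tokens + max_new_tokens <= context_length

def clamp_max_tokens(prompt_tokens: int, requested: int,
                     context_length: int = CONTEXT_LENGTH) -> int:
    """Shrink the requested completion budget to what actually fits."""
    return max(0, min(requested, context_length - prompt_tokens))

print(fits_in_context(30_000, 4_000))   # False: 34,000 > 32,768
print(clamp_max_tokens(30_000, 4_000))  # only 2,768 tokens remain
```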
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
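The model supports Tools and JSON Mode but not Structured Outputs, so a request can declare functions and ask for a JSON object, but not enforce a full JSON schema. A sketch of what such a request body might look like, assuming an OpenAI-compatible chat-completions schema (the endpoint details and the get_weather tool are hypothetical; field support may differ by provider):

```python
import json

# Sketch of an OpenAI-compatible chat-completions payload for
# GLM-4-32B-0414. Assumptions: the serving endpoint follows the
# OpenAI request schema; the get_weather tool is hypothetical.
payload = {
    "model": "GLM-4-32B-0414",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    # Tools are supported: declare functions the model may call.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # JSON Mode is supported; Structured Outputs (a json_schema
    # response format) is not, so only json_object applies here.
    "response_format": {"type": "json_object"},
    "max_tokens": 1024,
}

print(json.dumps(payload, indent=2)[:60])
```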
Compare with Other Models
See how this model stacks up against others.

MiniMaxAI · chat
MiniMax-M2
Released: Oct 28, 2025
Total context: 197K · Max output: 131K
Input: $0.3 / 1M tokens · Output: $1.2 / 1M tokens

DeepSeek · chat
DeepSeek-V3.2-Exp
Released: Oct 10, 2025
Total context: 164K · Max output: 164K
Input: $0.27 / 1M tokens · Output: $0.41 / 1M tokens

DeepSeek · chat
DeepSeek-V3.1-Terminus
Released: Sep 29, 2025
Total context: 164K · Max output: 164K
Input: $0.27 / 1M tokens · Output: $1.0 / 1M tokens

DeepSeek · chat
DeepSeek-V3.1
Released: Aug 25, 2025
Total context: 164K · Max output: 164K
Input: $0.27 / 1M tokens · Output: $1.0 / 1M tokens

Qwen · chat
Qwen3-VL-32B-Instruct
Released: Oct 21, 2025
Total context: 262K · Max output: 262K
Input: $0.2 / 1M tokens · Output: $0.6 / 1M tokens

Qwen · chat
Qwen3-VL-32B-Thinking
Released: Oct 21, 2025
Total context: 262K · Max output: 262K
Input: $0.2 / 1M tokens · Output: $1.5 / 1M tokens

Qwen · chat
Qwen3-VL-8B-Instruct
Released: Oct 15, 2025
Total context: 262K · Max output: 262K
Input: $0.18 / 1M tokens · Output: $0.68 / 1M tokens

Qwen · chat
Qwen3-VL-8B-Thinking
Released: Oct 15, 2025
Total context: 262K · Max output: 262K
Input: $0.18 / 1M tokens · Output: $2.0 / 1M tokens

Qwen · chat
Qwen3-VL-235B-A22B-Instruct
Released: Oct 4, 2025
Total context: 262K · Max output: 262K
Input: $0.3 / 1M tokens · Output: $1.5 / 1M tokens
Model FAQs: Usage, Deployment
Learn how to use and deploy this model with ease. (Note: fine-tuning is not supported for this model; see Supported Functionality above.)

