GLM-5

zai-org/GLM-5

About GLM-5

GLM-5 is a next-generation open-source model for complex systems engineering and long-horizon agentic tasks, scaled to ~744B sparse parameters (~40B active) and pretrained on ~28.5T tokens. It integrates DeepSeek Sparse Attention (DSA) to retain long-context capacity while reducing inference cost, and leverages the “slime” asynchronous RL stack to deliver strong performance on reasoning, coding, and agentic benchmarks.
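The sparse-parameter figure above (~744B total, ~40B active) comes from Mixture-of-Experts routing: a router scores every expert per token but activates only the top-k. The following is a minimal illustrative sketch of top-k expert selection in plain Python; it is not GLM-5's actual routing code, and the expert count and scores are made up for the example.

```python
import math

def top_k_experts(router_logits, k):
    """Pick the k highest-scoring experts and softmax-normalize their weights.

    Only these k experts run for this token, which is how a model with a
    very large total parameter count keeps its active parameters small.
    """
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:k]
    exps = [math.exp(router_logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# One token's router scores over 8 hypothetical experts; only 2 are activated.
print(top_k_experts([0.1, 2.0, -1.0, 0.5, 3.0, 0.0, 1.5, -0.5], k=2))
```

The returned (expert index, weight) pairs determine which expert FFNs run and how their outputs are mixed; all other experts are skipped entirely for that token.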

Available Serverless

Run queries immediately, pay only for usage

$1.0 / $3.2 Per 1M Tokens (input/output)
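The listed prices ($1.0 per 1M input tokens, $3.2 per 1M output tokens) can be turned into a per-request cost estimate. A minimal sketch; the function name is illustrative, not part of any official SDK:

```python
# Listed GLM-5 serverless prices, USD per 1M tokens.
INPUT_PRICE_PER_M = 1.0
OUTPUT_PRICE_PER_M = 3.2

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 200K-token prompt producing a 4K-token answer.
print(round(estimate_cost(200_000, 4_000), 4))  # 0.2128
```

Note that output tokens cost a bit over 3x as much as input tokens, so long generations dominate the bill even when prompts are large.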

Metadata

Created on

Feb 12, 2026

License

MIT

Provider

Z.ai

HuggingFace

Specification

State

Available

Architecture

Mixture of Experts (MoE) with DeepSeek Sparse Attention (DSA) and asynchronous RL stack

Calibrated

No

Mixture of Experts

Yes

Total Parameters

750B

Activated Parameters

40B

Reasoning

No

Precision

FP8

Context length

205K

Max Tokens

131K

Supported Functionality

Serverless

Supported

Serverless LoRA

Not supported

Fine-tuning

Not supported

Embeddings

Not supported

Rerankers

Not supported

Support image input

Not supported

JSON Mode

Not supported

Structured Outputs

Not supported

Tools

Supported

FIM Completion

Not supported

Chat Prefix Completion

Not supported
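Since the table lists Tools as supported, a request can attach tool definitions. Below is a hedged sketch of what such a request body could look like, assuming an OpenAI-compatible chat completions schema; the `get_weather` tool and the exact field layout are illustrative assumptions, not confirmed provider documentation.

```python
import json

# Hypothetical tool-calling request body for the GLM-5 serverless endpoint,
# assuming OpenAI-compatible field names ("messages", "tools", "max_tokens").
payload = {
    "model": "zai-org/GLM-5",
    "messages": [
        {"role": "user", "content": "What's the weather in Beijing?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative tool, not a real API
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "max_tokens": 1024,  # must stay under the 131K max-output limit above
}

print(json.dumps(payload)[:60])
```

Because Structured Outputs and JSON Mode are listed as unsupported, tool definitions like this are the main way to get machine-parseable responses from the model.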

Ready to accelerate your AI development?

© 2025 SiliconFlow
