Model Comparison

GLM-4.7 vs MiniMax-M2.5

Feb 15, 2026

Pricing

|        | GLM-4.7           | MiniMax-M2.5      |
|--------|-------------------|-------------------|
| Input  | $0.42 / M tokens  | $0.30 / M tokens  |
| Output | $2.20 / M tokens  | $1.20 / M tokens  |
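The per-million-token rates above translate directly into a per-request cost estimate. A minimal sketch, using the rates from the pricing table; the helper function is illustrative, not part of any SiliconFlow SDK:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimate request cost in USD, given per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# (input_rate, output_rate) in USD per million tokens, from the table above.
GLM_4_7 = (0.42, 2.20)
MINIMAX_M2_5 = (0.30, 1.20)

# Example: a request consuming 250K input tokens and 50K output tokens.
glm_cost = estimate_cost(250_000, 50_000, *GLM_4_7)        # 0.105 + 0.110 = 0.215
minimax_cost = estimate_cost(250_000, 50_000, *MINIMAX_M2_5)  # 0.075 + 0.060 = 0.135
```

At this workload, MiniMax-M2.5 comes out roughly 37% cheaper, with the output rate dominating the gap.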

Metadata

|            | GLM-4.7      | MiniMax-M2.5 |
|------------|--------------|--------------|
| Created on | Dec 22, 2025 | Feb 12, 2026 |
| License    | MIT          | MODIFIED-MIT |
| Provider   | Z.ai         | MiniMaxAI    |

Specification

|                      | GLM-4.7                    | MiniMax-M2.5             |
|----------------------|----------------------------|--------------------------|
| State                | Available                  | Available                |
| Architecture         | GLM-4 (Mixture of Experts) | Mixture-of-Experts (MoE) |
| Calibrated           | Yes                        | No                       |
| Mixture of Experts   | Yes                        | Yes                      |
| Total Parameters     | 355B                       | 229B                     |
| Activated Parameters | 32B                        | 229B                     |
| Reasoning            | No                         | No                       |
| Precision            | FP8                        | FP8                      |
| Context length       | 205K                       | 197K                     |
| Max Tokens           | 205K                       | 131K                     |

Supported Functionality

|                        | GLM-4.7       | MiniMax-M2.5  |
|------------------------|---------------|---------------|
| Serverless             | Supported     | Supported     |
| Serverless LoRA        | Not supported | Not supported |
| Fine-tuning            | Not supported | Not supported |
| Embeddings             | Not supported | Not supported |
| Rerankers              | Not supported | Not supported |
| Image input            | Not supported | Not supported |
| JSON Mode              | Not supported | Supported     |
| Structured Outputs     | Not supported | Not supported |
| Tools                  | Supported     | Supported     |
| FIM Completion         | Not supported | Not supported |
| Chat Prefix Completion | Not supported | Supported     |

© 2025 SiliconFlow