

Model Comparison: GLM-4.6V vs Ring-flash-2.0
Feb 28, 2026

Pricing

|        | GLM-4.6V         | Ring-flash-2.0   |
|--------|------------------|------------------|
| Input  | $0.30 / M tokens | $0.14 / M tokens |
| Output | $0.90 / M tokens | $0.57 / M tokens |
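The list prices above translate directly into per-request cost. A minimal sketch of that arithmetic (the `request_cost` helper is illustrative, not part of any provider SDK):

```python
# Per-million-token list prices (USD) from the pricing comparison above.
PRICES = {
    "GLM-4.6V":       {"input": 0.30, "output": 0.90},
    "Ring-flash-2.0": {"input": 0.14, "output": 0.57},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-million-token rates."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Example: a request with 100K input tokens and 10K output tokens.
glm = request_cost("GLM-4.6V", 100_000, 10_000)         # 0.030 + 0.009  = 0.039
ring = request_cost("Ring-flash-2.0", 100_000, 10_000)  # 0.014 + 0.0057 = 0.0197
```

At this mix, Ring-flash-2.0 comes out roughly half the price per request.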
Metadata

|            | GLM-4.6V    | Ring-flash-2.0 |
|------------|-------------|----------------|
| Created on | Dec 7, 2025 | Sep 19, 2025   |
| License    | MIT         | MIT            |
| Provider   | Z.ai        | inclusionAI    |
Specification

|                      | GLM-4.6V | Ring-flash-2.0 |
|----------------------|----------|----------------|
| State                | Available | Available |
| Architecture         | Multimodal with function calling, Mixture of Experts (MoE) | Mixture of Experts (MoE) with a 1/32 expert activation ratio and MTP layers; a low-activation, high-sparsity design |
| Calibrated           | Yes  | Yes  |
| Mixture of Experts   | Yes  | Yes  |
| Total parameters     | 106B | 100B |
| Activated parameters | 106B | 6.1B |
| Reasoning            | No   | No   |
| Precision            | FP8  | FP8  |
| Context length       | 131K | 131K |
| Max tokens           | 131K | 131K |
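The parameter counts above are what make the sparsity difference concrete: per token, a MoE model only runs its activated parameters, not its total. A back-of-envelope sketch using the listed figures:

```python
# Parameter counts (in billions) from the specification table above.
models = {
    "GLM-4.6V":       {"total_b": 106, "active_b": 106},
    "Ring-flash-2.0": {"total_b": 100, "active_b": 6.1},
}

# Fraction of parameters exercised per token = activated / total.
ratios = {name: m["active_b"] / m["total_b"] for name, m in models.items()}

for name, ratio in ratios.items():
    print(f"{name}: {ratio:.1%} of parameters active per token")
```

By these figures Ring-flash-2.0 touches about 6.1% of its weights per token, while GLM-4.6V's listed numbers imply full activation.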
Supported Functionality

|                        | GLM-4.6V      | Ring-flash-2.0 |
|------------------------|---------------|----------------|
| Serverless             | Supported     | Supported      |
| Serverless LoRA        | Not supported | Not supported  |
| Fine-tuning            | Not supported | Not supported  |
| Embeddings             | Not supported | Not supported  |
| Rerankers              | Not supported | Not supported  |
| Image input            | Not supported | Not supported  |
| JSON mode              | Not supported | Not supported  |
| Structured outputs     | Not supported | Not supported  |
| Tools                  | Supported     | Not supported  |
| FIM completion         | Not supported | Not supported  |
| Chat prefix completion | Not supported | Supported      |
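Tool support is the main functional split here: GLM-4.6V accepts tool definitions, Ring-flash-2.0 does not. A sketch of what a tool-bearing request body typically looks like, assuming an OpenAI-compatible chat-completions endpoint (the `get_weather` tool and its schema are made up for illustration):

```python
# Hypothetical tool-calling request body for GLM-4.6V, assuming an
# OpenAI-compatible chat-completions API; tool name/schema are illustrative.
payload = {
    "model": "GLM-4.6V",
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative tool, not a real API
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
```

Sending the same `tools` field to Ring-flash-2.0 would not work per the table above; with that model you would rely on plain chat (or its chat prefix completion support) instead.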
GLM-4.6V in Comparison

See how GLM-4.6V compares with other popular models across key dimensions: MiniMax-M2.5, GLM-5, Step-3.5-Flash, GLM-4.7, MiniMax-M2.1, MiniMax-M2, Qwen3-VL-32B-Instruct, Qwen3-VL-32B-Thinking, Qwen3-VL-30B-A3B-Instruct, Qwen3-VL-30B-A3B-Thinking, Qwen3-VL-235B-A22B-Instruct, Qwen3-VL-235B-A22B-Thinking, Qwen3-Omni-30B-A3B-Instruct, Ring-flash-2.0, Ling-flash-2.0, Qwen3-Omni-30B-A3B-Captioner, Qwen3-Omni-30B-A3B-Thinking, Qwen3-Next-80B-A3B-Instruct, Qwen3-Next-80B-A3B-Thinking, gpt-oss-120b.
