GLM-4.6V

zai-org/GLM-4.6V

About GLM-4.6V

GLM-4.6V achieves state-of-the-art (SOTA) accuracy in visual understanding among models of the same parameter scale. For the first time, it natively integrates function-call capabilities into the visual model architecture, bridging the gap between "visual perception" and "executable action" and providing a unified technical foundation for multimodal agents in real-world business scenarios. In addition, the visual context window has been expanded to 128K tokens, supporting long video-stream processing and high-resolution multi-image analysis.
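To make the "perception plus action" idea concrete, here is a minimal sketch of a single request that combines image input with a tool definition. It assumes an OpenAI-compatible `/chat/completions` payload shape; the `get_weather` tool, its parameters, and the image URL are hypothetical examples, not part of this page. Only the model id `zai-org/GLM-4.6V` comes from the page itself.

```python
# Sketch: one request carrying both visual input and a callable tool.
# The payload shape follows the common OpenAI-compatible convention
# (an assumption, not confirmed by this page); get_weather is a
# hypothetical tool used purely for illustration.

def build_request(image_url: str, question: str) -> dict:
    """Build a chat-completions body with an image part and a tool."""
    return {
        "model": "zai-org/GLM-4.6V",  # model id from this page
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

req = build_request("https://example.com/street.jpg",
                    "Which city is this? Check its current weather.")
```

The model can then answer from the image directly or emit a `get_weather` tool call for your application to execute.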

Available Serverless

Run queries immediately, pay only for usage

$0.3 / $0.9 per 1M tokens (input / output)
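The pay-per-usage pricing is straightforward to budget for. A small sketch using the rates listed above ($0.3 per 1M input tokens, $0.9 per 1M output tokens); the example token counts are illustrative:

```python
# Cost estimate at the listed serverless rates:
# $0.3 per 1M input tokens, $0.9 per 1M output tokens.

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000 * 0.3
            + output_tokens / 1_000_000 * 0.9)

# e.g. a 100K-token prompt with a 2K-token answer:
cost = estimate_cost(100_000, 2_000)  # 0.03 + 0.0018 = 0.0318 USD
```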

Metadata

Created on: Dec 8, 2025
License: MIT
Provider: Z.ai
HuggingFace: zai-org/GLM-4.6V

Specification

State: Available

Architecture
Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 106B
Activated Parameters: 106B
Reasoning: No
Precision: FP8
Context Length: 131K
Max Tokens: 131K
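Since the maximum output tokens draw from the same 131K window, a prompt plus its requested completion must fit inside the context length. A minimal sketch, assuming 131K means 131,072 tokens and that token counts come from your own tokenizer:

```python
# The context window is 131K tokens (assumed to mean 131_072; the
# "About" blurb rounds it to 128K). Prompt tokens plus requested
# output tokens must fit inside it.

CONTEXT_LENGTH = 131_072

def fits_in_context(prompt_tokens: int, max_new_tokens: int) -> bool:
    """Check that a request stays within the model's context window."""
    return prompt_tokens + max_new_tokens <= CONTEXT_LENGTH

fits_in_context(120_000, 8_000)   # True:  128_000 <= 131_072
fits_in_context(130_000, 4_096)   # False: 134_096 >  131_072
```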

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Supported
JSON Mode: Not supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported

SiliconFlow Service

Comprehensive solutions to deploy and scale your AI applications with maximum flexibility:

60% lower latency · 2x higher throughput · 65% cost savings

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.

Ready to accelerate your AI development?
