Qwen3-VL-8B-Thinking

Qwen/Qwen3-VL-8B-Thinking

About Qwen3-VL-8B-Thinking

Qwen3-VL-8B-Thinking is a vision-language model from the Qwen3 series, optimized for scenarios that require complex reasoning. In Thinking mode, the model reasons step by step before giving its final answer.
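
As a quick illustration, here is a minimal sketch of querying the model once it is running serverlessly, assuming an OpenAI-compatible chat completions endpoint; the base URL and API key below are placeholders, not this provider's actual values.

```python
# Minimal sketch of a text-only chat request, assuming an OpenAI-compatible
# serverless endpoint; base_url and api_key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder: replace with your provider's endpoint
    api_key="YOUR_API_KEY",                 # placeholder credential
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-VL-8B-Thinking",
    messages=[
        {"role": "user", "content": "Explain why the sky is blue, step by step."}
    ],
    max_tokens=1024,
)

# Thinking-mode models emit step-by-step reasoning before the final answer;
# depending on the provider, that reasoning may appear inline or in a separate field.
print(response.choices[0].message.content)
```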

Available Serverless

Run queries immediately, pay only for usage

$0.18 / $2.00 per 1M tokens (input/output)
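
For example, at these rates a workload of 200K input tokens and 50K output tokens would cost roughly 0.2 × $0.18 + 0.05 × $2.00 = $0.036 + $0.10 ≈ $0.14.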

Metadata

Created on: Oct 15, 2025
License: apache-2.0
Provider: Qwen

Specification

State: Available
Architecture
Calibrated: No
Mixture of Experts: No
Total Parameters: 8B
Activated Parameters: 8B
Reasoning: Yes
Precision: FP8
Context length: 262K
Max Tokens: 262K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Supported (see the sketches after this list)
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported (see the sketches after this list)
FIM Completion: Not supported
Chat Prefix Completion: Supported
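
As referenced in the list above, the following is a minimal sketch combining image input with JSON Mode, again assuming an OpenAI-compatible endpoint; the URLs, API key, and exact request fields accepted may differ by deployment.

```python
# Sketch of an image-input request with JSON mode on an assumed
# OpenAI-compatible endpoint; base_url, api_key, and the image URL are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-VL-8B-Thinking",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
                {"type": "text", "text": "Describe this chart as JSON with keys 'title' and 'trend'."},
            ],
        }
    ],
    # JSON Mode constrains the final answer to valid JSON (listed as supported above).
    response_format={"type": "json_object"},
    max_tokens=512,
)

print(response.choices[0].message.content)
```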
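
Tool use is also listed as supported; below is a similarly hedged sketch of tool calling. The get_weather function is a hypothetical example for illustration, not something provided by the model or the platform.

```python
# Sketch of tool calling on an assumed OpenAI-compatible endpoint;
# the get_weather tool is hypothetical.
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="Qwen/Qwen3-VL-8B-Thinking",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model decides to call a tool, the call arrives as structured
# arguments rather than free text.
print(response.choices[0].message.tool_calls)
```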

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.

Ready to accelerate your AI development?
