Qwen3.6-35B-A3B

Qwen/Qwen3.6-35B-A3B

About Qwen3.6-35B-A3B

Qwen3.6-35B-A3B is a large language model from Alibaba's Qwen3.6 series. It uses a Mixture of Experts (MoE) architecture with 35 billion total parameters and approximately 3 billion parameters active per inference step, delivering strong performance with efficient compute use. The model supports both thinking and non-thinking modes, allowing flexible switching between rapid responses and deep reasoning.
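
On most OpenAI-compatible serverless endpoints the thinking/non-thinking switch is exposed as a request-level option. Below is a minimal sketch assuming an OpenAI-compatible /chat/completions API and a provider-specific `enable_thinking` flag passed via `extra_body`; the base URL and the flag name are assumptions, so check your provider's documentation for the exact parameter.

```python
from openai import OpenAI

# Hypothetical endpoint and key; substitute your provider's values.
client = OpenAI(
    base_url="https://api.example.com/v1",  # assumption: OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

MODEL = "Qwen/Qwen3.6-35B-A3B"

# Deep-reasoning request: let the model think before answering.
thinking = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Prove that the sum of two odd numbers is even."}],
    extra_body={"enable_thinking": True},  # assumption: provider-specific flag name
)

# Fast, non-thinking request for latency-sensitive use.
direct = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Give a one-line summary of MoE models."}],
    extra_body={"enable_thinking": False},
)

print(thinking.choices[0].message.content)
print(direct.choices[0].message.content)
```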

Available Serverless

Run queries immediately, pay only for usage

Input Price: $0.2 / M Tokens
Output Price: $1.6 / M Tokens
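
Per-request cost is a linear function of token counts at these prices. A small sketch of the arithmetic, using the listed prices and made-up example token counts:

```python
INPUT_PRICE_PER_M = 0.2   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.6  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed serverless prices."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 8,000 prompt tokens and 1,000 generated tokens
# -> 0.008 * 0.2 + 0.001 * 1.6 = 0.0016 + 0.0016 = 0.0032 USD
print(f"${request_cost(8_000, 1_000):.4f}")
```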

Metadata

Created on:
License: Apache-2.0
Provider: Qwen
HuggingFace: Qwen/Qwen3.6-35B-A3B

Specification

State: Available
Architecture: MoE Causal LM
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 35B
Activated Parameters: 3B
Reasoning: No
Precision: FP8
Context Length: 262K
Max Tokens: 262K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Supported
JSON Mode: Supported (see the sketch below)
Structured Outputs: Not supported
Tools: Supported (see the sketch below)
FIM Completion: Not supported
Chat Prefix Completion: Not supported
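
JSON Mode and Tools are the two request-level features above that most affect client code. The following is a minimal sketch on an OpenAI-compatible endpoint; the base URL, key, and tool definition are assumptions for illustration, and since Structured Outputs (schema enforcement) is listed as not supported, only the plain `json_object` response format is used.

```python
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")  # assumed endpoint
MODEL = "Qwen/Qwen3.6-35B-A3B"

# 1) JSON Mode: request a syntactically valid JSON object (no schema enforcement).
resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Return the capital of France as JSON with a 'capital' key."}],
    response_format={"type": "json_object"},
)
print(json.loads(resp.choices[0].message.content))

# 2) Tools: declare a function the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# The model may or may not emit a tool call; guard before reading it.
message = resp.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```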

Ready to accelerate your AI development?