Qwen3.5-122B-A10B

Qwen/Qwen3.5-122B-A10B

About Qwen3.5-122B-A10B

Qwen3.5-122B-A10B is a native multimodal large language model from the Qwen team, with 122B total parameters and only 10B activated per token. It features an efficient hybrid architecture that combines Gated Delta Networks with sparse Mixture-of-Experts (MoE), natively supporting a 256K context length extensible to roughly 1M tokens. Through early-fusion training it achieves unified vision-language capabilities, supporting text, image, and video understanding, with strong performance across knowledge, reasoning, coding, agent, visual-understanding, and multilingual benchmarks, surpassing GPT-5-mini and Qwen3-235B-A22B on multiple metrics. It defaults to thinking mode, supports tool calling, and covers 201 languages and dialects.
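Since the model accepts image input alongside text, a multimodal request can be assembled in the common OpenAI-style chat-completions shape. A minimal sketch, assuming that message format (the exact field names a given provider accepts, and the `build_vision_request` helper, are illustrative, not taken from this page):

```python
# Sketch: build an OpenAI-compatible chat request payload for this model.
# The content-part shapes ("text" / "image_url") follow the common
# OpenAI-style multimodal format; a provider's accepted fields may differ.

MODEL_ID = "Qwen/Qwen3.5-122B-A10B"

def build_vision_request(prompt: str, image_url: str, max_tokens: int = 1024) -> dict:
    """Return a chat-completions payload mixing text and image content."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request("Describe this image.", "https://example.com/cat.png")
print(payload["model"])  # → Qwen/Qwen3.5-122B-A10B
```

The payload would then be POSTed to the provider's chat-completions endpoint with an API key; that transport step is omitted here.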

Available Serverless

Run queries immediately, pay only for usage.

Input Price: $0.26 / M Tokens
Output Price: $2.08 / M Tokens
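At these rates, a request's cost is input_tokens ÷ 1M × $0.26 plus output_tokens ÷ 1M × $2.08. A minimal sketch of that arithmetic (the prices come from this page; the function name is illustrative):

```python
# Serverless pricing from this page: $0.26 per 1M input tokens,
# $2.08 per 1M output tokens.
INPUT_PRICE_PER_M = 0.26
OUTPUT_PRICE_PER_M = 2.08

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed serverless rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 1M input tokens + 100K output tokens
# costs $0.26 + $0.208 = $0.468.
print(round(estimate_cost_usd(1_000_000, 100_000), 6))  # → 0.468
```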

Metadata

Created on:
License: APACHE-2.0
Provider: Qwen

Specification

State: Available
Architecture: Hybrid Sparse MoE
Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 122B
Activated Parameters: 10B
Reasoning: No
Precision: FP8
Context Length: 262K
Max Tokens: 262K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
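Since tool calling is listed as supported, a request can attach function definitions in the common OpenAI-style tools format. A minimal sketch under that assumption; the `get_weather` tool and `build_tool_request` helper are made-up examples, not part of this page:

```python
# Sketch: a tool-calling request payload in the common OpenAI-style shape.
# The get_weather tool is a hypothetical example; the schema keys follow
# the widely used "type": "function" convention.

def build_tool_request(user_message: str) -> dict:
    """Return a chat-completions payload that exposes one callable tool."""
    return {
        "model": "Qwen/Qwen3.5-122B-A10B",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # Let the model decide whether to call the tool.
        "tool_choice": "auto",
    }

req = build_tool_request("What's the weather in Hangzhou?")
print(req["tools"][0]["function"]["name"])  # → get_weather
```

If the model decides to call the tool, the response would carry a `tool_calls` entry whose arguments the caller executes and feeds back as a `tool` role message.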

Ready to accelerate your AI development?