Model Comparison: QwQ-32B vs Qwen3-Omni-30B-A3B-Instruct

Feb 28, 2026

Pricing

| | QwQ-32B | Qwen3-Omni-30B-A3B-Instruct |
| --- | --- | --- |
| Input | $0.15 / M Tokens | $0.10 / M Tokens |
| Output | $0.58 / M Tokens | $0.40 / M Tokens |
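To put these rates in concrete terms, here is a minimal sketch that estimates the cost of a single request from its input and output token counts. The prices are the per-million-token figures from the table above; the helper function and the example token counts are illustrative only.

```python
# Per-million-token prices (USD) from the pricing table above.
PRICES = {
    "QwQ-32B": {"input": 0.15, "output": 0.58},
    "Qwen3-Omni-30B-A3B-Instruct": {"input": 0.10, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10,000-token prompt that produces a 2,000-token completion.
for name in PRICES:
    print(f"{name}: ${request_cost(name, 10_000, 2_000):.6f}")
```

At those example volumes, QwQ-32B works out to about $0.0027 per request versus roughly $0.0018 for Qwen3-Omni-30B-A3B-Instruct.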

Metadata

| | QwQ-32B | Qwen3-Omni-30B-A3B-Instruct |
| --- | --- | --- |
| Created on | Mar 5, 2025 | Sep 20, 2025 |
| License | Apache-2.0 | - |
| Provider | Qwen | Qwen |

Specification

| | QwQ-32B | Qwen3-Omni-30B-A3B-Instruct |
| --- | --- | --- |
| State | Available | Available |
| Architecture | Transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias; 64 layers with GQA (40 query heads, 8 KV heads; sketched below) | Natively end-to-end multilingual omni-modal foundation model with an MoE-based Thinker-Talker design, AuT pretraining, and a multi-codebook design |
| Calibrated | No | No |
| Mixture of Experts | No | Yes |
| Total Parameters | 32B | 30B |
| Activated Parameters | 32.5B | 3B |
| Reasoning | No | No |
| Precision | FP8 | FP8 |
| Context Length | 131K | 66K |
| Max Tokens | 131K | 66K |
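The grouped-query attention (GQA) layout listed for QwQ-32B means its 40 query heads share only 8 key/value heads, i.e. 5 query heads per KV head, which shrinks the KV cache roughly 5x relative to full multi-head attention. The sketch below shows just that head grouping; the head dimension and sequence length are made-up illustration values, not the model's actual configuration.

```python
import torch

# Grouped-query attention sketch for the head layout above:
# 40 query heads, 8 key/value heads -> 5 query heads per KV head.
# head_dim and seq are illustrative assumptions, not QwQ-32B's real sizes.
n_q_heads, n_kv_heads, head_dim, seq = 40, 8, 64, 16
group = n_q_heads // n_kv_heads  # 5

q = torch.randn(seq, n_q_heads, head_dim)
k = torch.randn(seq, n_kv_heads, head_dim)   # only 8 KV heads are cached
v = torch.randn(seq, n_kv_heads, head_dim)

# Broadcast each KV head to the 5 query heads in its group.
k = k.repeat_interleave(group, dim=1)        # (seq, 40, head_dim)
v = v.repeat_interleave(group, dim=1)

scores = q.transpose(0, 1) @ k.permute(1, 2, 0) / head_dim**0.5  # (40, seq, seq)
out = torch.softmax(scores, dim=-1) @ v.transpose(0, 1)          # (40, seq, head_dim)
print(out.shape)  # torch.Size([40, 16, 64])
```

A causal mask and RoPE are omitted here to keep the query-to-KV-head grouping front and center.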

Supported Functionality

| | QwQ-32B | Qwen3-Omni-30B-A3B-Instruct |
| --- | --- | --- |
| Serverless | Supported | Supported |
| Serverless LoRA | Not supported | Not supported |
| Fine-tuning | Not supported | Not supported |
| Embeddings | Not supported | Not supported |
| Rerankers | Not supported | Not supported |
| Image Input | Not supported | Not supported |
| JSON Mode | Not supported | Supported |
| Structured Outputs | Not supported | Not supported |
| Tools | Supported | Supported |
| FIM Completion | Not supported | Supported |
| Chat Prefix Completion | Not supported | Supported |
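Since both models list Tools as supported, a function-calling request is the most portable way to exercise either one. The sketch below uses an OpenAI-compatible chat completions client; the base URL, model identifier, and the `get_weather` tool are assumptions for illustration, so substitute the exact values from your provider's documentation. Per the table, JSON Mode (typically requested via `response_format={"type": "json_object"}` on OpenAI-compatible APIs) would only be accepted by Qwen3-Omni-30B-A3B-Instruct.

```python
# Sketch of a tool-calling request against an OpenAI-compatible endpoint.
# The base URL, model identifier, and get_weather tool are illustrative
# assumptions -- substitute the values from your provider's documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.com/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool used only for this example
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen/QwQ-32B",  # assumed identifier; either model in the table lists tool support
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```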
