Qwen3-Omni-30B-A3B-Instruct

Qwen/Qwen3-Omni-30B-A3B-Instruct

About Qwen3-Omni-30B-A3B-Instruct

Qwen3-Omni-30B-A3B-Instruct is a member of the latest Qwen3 series from Alibaba's Qwen team. It is a Mixture of Experts (MoE) model with 30 billion total parameters and 3 billion activated parameters, which reduces inference cost while maintaining strong performance. The model was trained on high-quality, multi-source, multilingual data and performs well on basic capabilities such as multilingual dialogue, as well as on code and math tasks.
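The model is served behind an OpenAI-compatible chat-completions API, so a standard client works. Below is a minimal sketch, assuming the `https://api.siliconflow.cn/v1` base URL and a `SILICONFLOW_API_KEY` environment variable; substitute your own endpoint and credentials.

```python
# Minimal sketch: query the model through an OpenAI-compatible
# chat-completions endpoint. base_url and the env-var name are
# assumptions for illustration, not guaranteed by this listing.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",   # assumed endpoint
    api_key=os.environ["SILICONFLOW_API_KEY"],  # hypothetical env var
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-Omni-30B-A3B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a Mixture of Experts model is."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```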

Available Serverless

Run queries immediately, pay only for usage

$0.1 / $0.4 per 1M tokens (input / output)
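At these rates, per-request cost is simple arithmetic: tokens times rate. A quick sketch, with illustrative token counts:

```python
# Back-of-the-envelope cost check at the listed rates:
# $0.1 per 1M input tokens, $0.4 per 1M output tokens.
INPUT_RATE = 0.1 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.4 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 20K-token prompt with a 1K-token reply costs ~$0.0024.
print(f"${request_cost(20_000, 1_000):.4f}")
```

Note that output tokens cost 4x input tokens here, so long completions dominate spend; a full 1M tokens in plus 1M tokens out comes to $0.50.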

Metadata

Created on: Oct 4, 2025
License: -
Provider: Qwen

Specification

State: Available

Architecture
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 30B
Activated Parameters: 3B
Reasoning: No
Precision: FP8
Context Length: 66K
Max Tokens: 66K
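Since the context window and the max-tokens cap are both 66K, a long prompt can crowd out the completion budget. A rough pre-flight check follows, assuming "66K" means 65,536 tokens and using a crude 4-characters-per-token heuristic; use the model's actual tokenizer for accurate counts.

```python
# Rough sketch: keep prompt + completion inside the context window.
CONTEXT_LIMIT = 65_536  # assumed numeric value of "66K"

def fits_in_context(prompt: str, max_tokens: int) -> bool:
    est_prompt_tokens = len(prompt) // 4  # crude heuristic, not exact
    return est_prompt_tokens + max_tokens <= CONTEXT_LIMIT

print(fits_in_context("hello " * 10_000, max_tokens=4_096))  # True
```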

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Supported (see the multimodal sketch below)
JSON Mode: Supported (see the JSON Mode sketch below)
Structured Outputs: Not supported
Tools: Supported (see the tool-calling sketch below)
FIM Completion: Supported
Chat Prefix Completion: Supported
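Image input can be sent as an OpenAI-style multimodal message. The listing confirms image input is supported; the exact payload shape below (the `image_url` content part) is an assumption, and the URL is a placeholder.

```python
# Multimodal sketch. Reuses the `client` configured in the first
# sketch above; the image_url content shape is an assumption.
response = client.chat.completions.create(
    model="Qwen/Qwen3-Omni-30B-A3B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sample.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```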
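For JSON Mode, the common OpenAI-compatible switch is `response_format={"type": "json_object"}`; whether this provider uses the same field name is an assumption. Since the listing marks Structured Outputs (schema enforcement) as not supported, validate the returned shape yourself.

```python
# JSON Mode sketch. Reuses the `client` from the first sketch above.
import json

response = client.chat.completions.create(
    model="Qwen/Qwen3-Omni-30B-A3B-Instruct",
    messages=[{
        "role": "user",
        "content": "Return a JSON object with keys 'city' and 'population' for Tokyo.",
    }],
    response_format={"type": "json_object"},  # assumed field name
)
data = json.loads(response.choices[0].message.content)
print(data)
```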
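Tool use follows the standard OpenAI-compatible `tools` schema. In the sketch below, `get_weather` is a hypothetical function defined only for illustration; the model returns a structured call for your code to execute.

```python
# Tool-calling sketch. Reuses the `client` from the first sketch above.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen/Qwen3-Omni-30B-A3B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)
```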

SiliconFlow Service

Comprehensive solutions to deploy and scale your AI applications with maximum flexibility:

- 60% lower latency
- 2x higher throughput
- 65% cost savings

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease. (Note: fine-tuning is not supported for this model; see Supported Functionality above.)

Ready to accelerate your AI development?