
Qwen3-30B-A3B API, Deployment, Pricing
Qwen/Qwen3-30B-A3B
Qwen3-30B-A3B is the latest large language model in the Qwen series, featuring a Mixture-of-Experts (MoE) architecture with 30.5B total parameters and 3.3B activated parameters. The model uniquely supports seamless switching between a thinking mode (for complex logical reasoning, math, and coding) and a non-thinking mode (for efficient, general-purpose dialogue). It demonstrates significantly enhanced reasoning capabilities and superior human preference alignment in creative writing, role-playing, and multi-turn dialogue. The model also excels at agent tasks, integrating precisely with external tools, and supports over 100 languages and dialects with strong multilingual instruction following and translation.
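The thinking/non-thinking switch described above is typically exposed as a per-request parameter on OpenAI-compatible endpoints. Below is a minimal sketch of building such a chat-completion payload; the `enable_thinking` field name and the token budgets are assumptions based on common Qwen3 deployments, not something this page specifies — check your provider's API docs.

```python
def build_chat_payload(prompt: str, thinking: bool) -> dict:
    """Build an OpenAI-compatible chat payload for Qwen3-30B-A3B.

    The `enable_thinking` flag is an assumption (many Qwen3 servers
    accept such an extra field); it toggles step-by-step reasoning
    per request rather than requiring a different model.
    """
    return {
        "model": "Qwen/Qwen3-30B-A3B",
        "messages": [{"role": "user", "content": prompt}],
        # Assumed parameter name; verify against your endpoint.
        "enable_thinking": thinking,
        # Thinking mode emits reasoning tokens, so give it more room.
        "max_tokens": 4096 if thinking else 1024,
    }

# Thinking mode for a math problem, non-thinking for casual dialogue.
math_req = build_chat_payload("Prove that sqrt(2) is irrational.", thinking=True)
chat_req = build_chat_payload("Suggest a name for a coffee shop.", thinking=False)
```

The same model serves both request styles, which is the practical upside of the dual-mode design: routing logic stays in the client, not in model selection.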
Details
Model Provider: Qwen3
Type: text
Sub Type: chat
Size: MoE
Publish Time: Apr 30, 2025
Input Price: $0.09 / M Tokens
Output Price: $0.45 / M Tokens
Context Length: 131K
Tags: Reasoning, MoE, 30B, 131K
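Given the per-million-token prices listed above, the cost of a single request can be estimated directly. A small sketch (prices taken from this page; the token counts are hypothetical):

```python
INPUT_PRICE = 0.09   # USD per 1M input tokens (from the table above)
OUTPUT_PRICE = 0.45  # USD per 1M output tokens (from the table above)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one Qwen3-30B-A3B request at these rates."""
    return (input_tokens / 1e6) * INPUT_PRICE + (output_tokens / 1e6) * OUTPUT_PRICE

# Example: a 2,000-token prompt with an 8,000-token thinking-mode answer.
cost = request_cost(2_000, 8_000)  # 0.00018 + 0.0036 = 0.00378 USD
```

Note that thinking mode inflates output-token counts (reasoning tokens are billed as output), so the 5x input/output price gap matters most for reasoning-heavy workloads.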
Compare with Other Models
See how this model stacks up against others.
Model FAQs: Usage, Deployment
Learn how to use, fine-tune, and deploy this model with ease.
What is the Qwen3-30B-A3B model, and what are its core capabilities and technical specifications?
In which business scenarios does Qwen3-30B-A3B perform well? Which industries or applications is it suitable for?
How can the performance and effectiveness of Qwen3-30B-A3B be optimized in actual business use?
Compared with other models, when should Qwen3-30B-A3B be selected?
What are SiliconFlow's key strengths in serverless AI deployment for Qwen3-30B-A3B?
What makes SiliconFlow the top platform for the Qwen3-30B-A3B API?