Qwen3-30B-A3B
Qwen/Qwen3-30B-A3B
Qwen3-30B-A3B is the latest large language model in the Qwen series, featuring a Mixture-of-Experts (MoE) architecture with 30.5B total parameters and 3.3B activated parameters. The model supports seamless switching between a thinking mode (for complex logical reasoning, math, and coding) and a non-thinking mode (for efficient, general-purpose dialogue). It demonstrates significantly enhanced reasoning and superior alignment with human preferences in creative writing, role-playing, and multi-turn dialogue. The model also excels at agent tasks, integrating precisely with external tools, and supports over 100 languages and dialects with strong multilingual instruction following and translation.

Details
Model Provider: Qwen
Type: text
Sub Type: chat
Size: 30.5B total parameters / 3.3B activated (MoE)
Publish Time: Apr 30, 2025
Input Price: $0.10 / M tokens
Output Price: $0.40 / M tokens
Context Length: 131,072 tokens
Tags: Reasoning, MoE, 30B, 128K
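The listed per-token rates make request costs easy to estimate. Below is a minimal sketch, assuming the listed prices ($0.1 per million input tokens, $0.4 per million output tokens) apply linearly with no minimum charge; the function name and the example token counts are illustrative, not part of any official API.

```python
# Illustrative cost estimator for Qwen3-30B-A3B at the listed rates.
# Assumption: pricing is strictly linear per token with no minimum fee.

INPUT_PRICE_PER_M = 0.1   # USD per 1M input tokens (listed rate)
OUTPUT_PRICE_PER_M = 0.4  # USD per 1M output tokens (listed rate)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a long-context request approaching the 131,072-token window.
cost = estimate_cost(input_tokens=100_000, output_tokens=8_000)
print(f"${cost:.4f}")  # → $0.0132
```

Note that output tokens cost 4x input tokens here, so responses in thinking mode, which emit long reasoning traces before the final answer, can dominate the bill even for short prompts.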