
Qwen3-30B-A3B-Thinking-2507 API, Fine-Tuning, Deployment
Qwen/Qwen3-30B-A3B-Thinking-2507
Qwen3-30B-A3B-Thinking-2507 is the latest thinking model in the Qwen3 series, released by Alibaba's Qwen team. A Mixture-of-Experts (MoE) model with 30.5 billion total parameters and 3.3 billion active parameters, it is focused on complex tasks. The model demonstrates significantly improved performance on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise, as well as markedly better general capabilities such as instruction following, tool usage, text generation, and alignment with human preferences. It natively supports a 256K-token context, which can be extended to 1 million tokens. This version is designed specifically for "thinking mode", tackling highly complex problems through step-by-step reasoning, and also excels at agentic tasks.
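A minimal sketch of calling this model through an OpenAI-compatible chat-completions API. The model identifier comes from this page; the endpoint path, API key, and helper function are placeholder assumptions, not details stated here:

```python
import json

# Model identifier as listed on this page.
MODEL_ID = "Qwen/Qwen3-30B-A3B-Thinking-2507"

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat-completions payload (sketch)."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Prove that the sum of two even integers is even.")
print(json.dumps(payload, indent=2))
# POST this payload to the provider's chat-completions endpoint
# (e.g. /v1/chat/completions) with an Authorization: Bearer <API_KEY> header.
```

Because this is a thinking model, responses typically include a step-by-step reasoning trace before the final answer, so output token counts run higher than for non-thinking chat models.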
Details
- Model Provider: Qwen
- Type: text
- Sub Type: chat
- Size: 30B
- Publish Time: Jul 31, 2025
- Input Price: $0.1 / M Tokens
- Output Price: $0.4 / M Tokens
- Context Length: 256K
- Tags: Reasoning, MoE, 30B, 256K
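Using the listed prices ($0.1 per million input tokens, $0.4 per million output tokens), a rough per-request cost estimate can be sketched as follows; the example token counts are illustrative, not from this page:

```python
# Prices as listed on this page (USD per million tokens).
INPUT_PRICE_PER_M = 0.1
OUTPUT_PRICE_PER_M = 0.4

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from input/output token counts."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10K-token prompt with 40K tokens of thinking trace + answer.
print(f"${estimate_cost(10_000, 40_000):.4f}")  # → $0.0170
```

Note that thinking-mode reasoning traces are billed as output tokens, so output cost usually dominates for this model.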