
Qwen3-30B-A3B-Instruct-2507 API, Fine-Tuning, Deployment
Qwen/Qwen3-30B-A3B-Instruct-2507
Qwen3-30B-A3B-Instruct-2507 is the updated version of Qwen3-30B-A3B in non-thinking mode. It is a Mixture-of-Experts (MoE) model with 30.5 billion total parameters, of which 3.3 billion are activated per token. This version brings significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage. It also shows substantial gains in long-tail knowledge coverage across multiple languages and markedly better alignment with user preferences in subjective and open-ended tasks, yielding more helpful responses and higher-quality text generation. In addition, its long-context understanding has been extended to 256K tokens. The model supports only non-thinking mode and does not generate `<think></think>` blocks in its output.
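As a minimal sketch of using the model through an OpenAI-compatible chat completions API, the snippet below builds a request body for this model. The base URL and API key are placeholders (assumptions, not a real endpoint); substitute your provider's values. Note that because the 2507 Instruct release runs in non-thinking mode only, no thinking-mode switch is included in the payload.

```python
import json

API_BASE = "https://api.example.com/v1"  # hypothetical endpoint (assumption)
API_KEY = "YOUR_API_KEY"                 # placeholder

def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the JSON body for a non-streaming chat completion request."""
    return {
        "model": "Qwen/Qwen3-30B-A3B-Instruct-2507",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        # Non-thinking mode is the only mode for this release, so no
        # thinking-related flag is set here.
    }

body = build_chat_request("Summarize mixture-of-experts models in one sentence.")
print(json.dumps(body, indent=2))
```

You would POST this body to `{API_BASE}/chat/completions` with an `Authorization: Bearer {API_KEY}` header; the payload shape follows the widely used OpenAI-compatible convention, so the exact endpoint path may differ by provider.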
Details
Model Provider: Qwen
Type: text
Sub Type: chat
Size: 30B
Publish Time: Jul 30, 2025
Input Price: $0.10 / M Tokens
Output Price: $0.40 / M Tokens
Context Length: 256K
Tags: MoE, 30B, 256K
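Given the listed rates of $0.10 per million input tokens and $0.40 per million output tokens, the cost of a single request can be estimated with a small helper. This is an illustrative sketch based only on the prices above; real billing may round or meter differently depending on the provider.

```python
# Listed per-million-token rates for Qwen3-30B-A3B-Instruct-2507.
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.40  # USD per 1M output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 200K-token context with a 2K-token reply:
print(f"${request_cost_usd(200_000, 2_000):.4f}")  # → $0.0208
```

Even a request that nearly fills the 256K context stays in the low cents, which is the practical upshot of the pricing above.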