
Qwen3-235B-A22B API, Deployment, Pricing
Qwen/Qwen3-235B-A22B
Qwen3-235B-A22B is the latest large language model in the Qwen series, featuring a Mixture-of-Experts (MoE) architecture with 235B total parameters and 22B activated parameters. The model supports seamless switching between a thinking mode (for complex logical reasoning, math, and coding) and a non-thinking mode (for efficient, general-purpose dialogue). It demonstrates significantly enhanced reasoning capabilities and superior human-preference alignment in creative writing, role-playing, and multi-turn dialogue. It also excels at agent tasks, integrating precisely with external tools, and supports over 100 languages and dialects with strong multilingual instruction following and translation.
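The thinking/non-thinking switch described above is typically exposed as a per-request flag on OpenAI-compatible Qwen3 deployments. The sketch below builds such a request payload; the base assumption is that the serving endpoint accepts an `enable_thinking` field (common in Qwen3 deployments, but verify against your provider's API docs).

```python
# Sketch of a chat-completions payload for Qwen3-235B-A22B on an
# OpenAI-compatible endpoint. The "enable_thinking" flag is an
# assumption based on common Qwen3 serving setups; check your
# provider's documentation for the exact parameter name.

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload, toggling Qwen3's thinking mode."""
    return {
        "model": "Qwen/Qwen3-235B-A22B",
        "messages": [{"role": "user", "content": prompt}],
        # Per-request mode switch: True for complex reasoning, math,
        # and coding; False for fast general-purpose dialogue.
        "enable_thinking": thinking,
        "max_tokens": 512,
    }

reasoning_req = build_request("Prove that sqrt(2) is irrational.", thinking=True)
chat_req = build_request("Say hello in French.", thinking=False)
```

The payload can be sent with any HTTP client; with the official `openai` SDK, non-standard fields such as `enable_thinking` would be passed through `extra_body`.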
Details
- Model Provider: Qwen
- Type: text
- Sub Type: chat
- Size: 235B (22B activated)
- Publish Time: Apr 30, 2025
- Input Price: $0.35 / M Tokens
- Output Price: $1.42 / M Tokens
- Context Length: 131K
- Tags: MoE, 235B, 128K
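Given the listed prices, the cost of a request is a simple linear function of input and output token counts. A minimal estimator, using the per-million-token prices from the table above (prices may change; token counts come from your tokenizer):

```python
# Rough per-request cost estimator based on the listed prices:
# $0.35 per 1M input tokens and $1.42 per 1M output tokens.

INPUT_PRICE_PER_M = 0.35   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.42  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 10K-token prompt with a 2K-token answer:
cost = estimate_cost(10_000, 2_000)  # 0.0035 + 0.00284 = 0.00634 USD
```

Note that output tokens cost roughly 4x more than input tokens, so long generations (especially thinking-mode traces) dominate the bill.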
Model FAQs: Usage, Deployment
Learn how to use, fine-tune, and deploy this model with ease.
What is the Qwen/Qwen3-235B-A22B model, and what are its core capabilities and technical specifications?
In which business scenarios does Qwen/Qwen3-235B-A22B perform well? Which industries or applications is it suitable for?
How can the performance and effectiveness of Qwen/Qwen3-235B-A22B be optimized in actual business use?
Compared with other models, when should Qwen/Qwen3-235B-A22B be selected?
What are SiliconFlow's key strengths in AI serverless deployment for Qwen/Qwen3-235B-A22B?
What makes SiliconFlow the top platform for Qwen/Qwen3-235B-A22B API?