Qwen/Qwen3-235B-A22B-Thinking-2507

Qwen3-235B-A22B-Thinking-2507 is a member of the Qwen3 large language model series developed by Alibaba's Qwen team and is specialized for highly complex reasoning tasks. The model is built on a Mixture-of-Experts (MoE) architecture with 235 billion total parameters, of which approximately 22 billion are activated per token, improving computational efficiency while maintaining strong performance. As a dedicated "thinking" model, it delivers significantly improved results on tasks that demand expert-level reasoning, including logic, mathematics, science, coding, and academic benchmarks, achieving state-of-the-art performance among open-source thinking models. It also offers enhanced general capabilities such as instruction following, tool usage, and text generation, and natively supports a 256K context window, making it well suited to scenarios that require deep reasoning over long documents.
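As a thinking model, it emits its reasoning before the final answer. Below is a minimal sketch of separating the two, assuming the reasoning is wrapped in <think>...</think> tags in the returned text; the exact delimiter, and whether the API instead surfaces reasoning in a separate field, are assumptions not confirmed by this page.

import re

def split_thinking(text: str) -> tuple[str, str]:
    # Assumes reasoning is delimited by <think>...</think>;
    # adjust if the API returns reasoning in a separate field.
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if match:
        return match.group(1).strip(), text[match.end():].strip()
    return "", text.strip()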

API Usage

curl --request POST \
  --url https://api.siliconflow.com/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "Qwen/Qwen3-235B-A22B-Thinking-2507",
  "messages": [
    {
      "role": "user",
      "content": "Tell me a story"
    }
  ]
}'
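The same request in Python with the requests library, mirroring the curl call above. The response parsing assumes the endpoint follows the OpenAI-compatible chat completions schema (choices[0].message.content), which the URL suggests but this page does not state explicitly.

import requests

url = "https://api.siliconflow.com/v1/chat/completions"
headers = {
    "Authorization": "Bearer <token>",  # replace <token> with your API key
    "Content-Type": "application/json",
}
payload = {
    "model": "Qwen/Qwen3-235B-A22B-Thinking-2507",
    "messages": [
        {"role": "user", "content": "Tell me a story"}
    ],
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])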

Details

Model Provider: Qwen
Type: text
Sub Type: chat
Size: 235B
Publish Time: Jul 28, 2025
Input Price: $0.35 / M Tokens
Output Price: $1.42 / M Tokens
Context Length: 256K
Tags: MoE, 235B, 256K
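Given the listed prices, per-request cost follows directly from token counts. A small sketch; the 10,000/2,000 token counts are hypothetical example values.

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    # Listed prices: $0.35 per 1M input tokens, $1.42 per 1M output tokens.
    return (input_tokens / 1_000_000) * 0.35 + (output_tokens / 1_000_000) * 1.42

# Example: 10,000 input tokens + 2,000 output tokens
# = $0.00350 + $0.00284 = $0.00634
print(f"${estimate_cost(10_000, 2_000):.5f}")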

Ready to accelerate your AI development?

© 2025 SiliconFlow Technology PTE. LTD.