Qwen3-235B-A22B-Instruct-2507: Leading Open-Source Model Now Live on SiliconFlow

Jul 28, 2025


Qwen has just released Qwen3-235B-A22B-Instruct-2507, an upgraded version of its flagship Qwen3-235B-A22B model (non-thinking mode). This release marks a major leap forward for open-source models, bringing enhanced general capabilities and superior reasoning performance, and it is now available on SiliconFlow.

This cutting-edge model delivers significant improvements in instruction following, logical reasoning, mathematics, coding, and tool usage. Across comprehensive benchmarks, it outperforms leading open-source models such as Kimi-K2 and DeepSeek-V3-0324, as well as proprietary models such as Claude-Opus4-Non-thinking. Whether you're building enterprise applications, conducting advanced research, creating multilingual content, or developing intelligent assistants, this model handles these tasks with exceptional performance.

With SiliconFlow's Qwen3-235B-A22B-Instruct-2507 API, you can expect:

  • High-Speed Inference: Optimized for lower latency and higher throughput.

  • Cost-Effective Pricing: $0.35/M tokens (input) and $1.42/M tokens (output).

  • Extended Context Window: 256K context window for complex tasks.
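The listed per-token rates make request costs easy to estimate. As a minimal sketch (the helper function name is illustrative, and the rates are the ones quoted above):

```python
# SiliconFlow's listed rates for Qwen3-235B-A22B-Instruct-2507:
# $0.35 per million input tokens, $1.42 per million output tokens.
INPUT_PRICE_PER_M = 0.35
OUTPUT_PRICE_PER_M = 1.42

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + (
        output_tokens / 1_000_000
    ) * OUTPUT_PRICE_PER_M

# Example: a 200K-token prompt (well within the 256K window) with a 2K-token reply.
print(f"${estimate_cost(200_000, 2_000):.4f}")  # → $0.0728
```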

Enhanced Capabilities & Superior Performance

The updated Qwen3-235B-A22B-Instruct-2507, now available on SiliconFlow, features the following key enhancements:

  • Enhanced General Capabilities: Improved instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage.

  • Better User Alignment: More precise alignment with user preferences in subjective and open-ended tasks, enabling more helpful and higher-quality responses.

  • Expanded Multilingual Knowledge: Substantial gains in long-tail knowledge coverage across multiple languages, including specialized, domain-specific, and less common information.

  • Extended Context Understanding: 256K long-context understanding capabilities.

These capabilities are clearly demonstrated in comprehensive benchmark evaluations, where Qwen3-235B-A22B-Instruct-2507 consistently outperforms leading competitors:

  • Advanced Scientific Reasoning: 77.5 on GPQA, outperforming Kimi K2 (75.1) and Claude Opus 4 Non-thinking (74.9), demonstrating exceptional performance in graduate-level scientific reasoning and complex problem-solving capabilities.

  • Mathematical Problem-Solving: 70.3 on AIME25, significantly ahead of Kimi K2 (49.5) and DeepSeek-V3-0324 (46.6), proving advanced competitive mathematics skills.

  • Real-World Coding Performance: 51.8 on LiveCodeBench v6, surpassing Kimi K2 (48.9) and DeepSeek-V3-0324 (45.2), validating strong programming capabilities in practical scenarios.

  • Excellent Conversational Performance: 79.2 on Arena-Hard v2, outperforming DeepSeek-V3 (66.1) and Qwen3-235B-A22B Non-thinking (52.0), confirming superior capabilities in complex, open-ended tasks and strong alignment with human preferences.

  • Tool Usage and Function Calling: 70.9 on BFCL-v3, leading Qwen3-235B-A22B Non-thinking (68.0) and Kimi K2 (65.2), demonstrating advanced capabilities in external tool integration and API usage.

These impressive results underscore a significant milestone in open-source AI development. Qwen3-235B-A22B-Instruct-2507 not only matches but also surpasses proprietary models like Claude Opus 4 Non-thinking across multiple benchmarks, demonstrating that open-source models are reaching new heights of capability.

Get Started Immediately

  1. Explore: Try Qwen3-235B-A22B-Instruct-2507 in the SiliconFlow playground.

  2. Integrate: Use our OpenAI-compatible API. Explore the full API specifications in the SiliconFlow API documentation.

import requests

url = "https://api.siliconflow.com/v1/chat/completions"

payload = {
    "model": "Qwen/Qwen3-235B-A22B-Instruct-2507",
    "max_tokens": 512,
    "min_p": 0.05,
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 50,
    "frequency_penalty": 0.5,
    "messages": [
        {
            "content": "What opportunities and challenges will the Chinese large model industry face in 2025?",
            "role": "user"
        }
    ]
}
headers = {
    # Replace <YOUR_API_KEY> with your SiliconFlow API key.
    "Authorization": "Bearer <YOUR_API_KEY>",
    "Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)

print(response.text)
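For production use, it can help to wrap the call above in a small helper that assembles the request body and surfaces HTTP errors instead of printing raw text. A minimal sketch, assuming the same endpoint and model ID as above (the function names are illustrative, not part of a SiliconFlow SDK):

```python
import requests

API_URL = "https://api.siliconflow.com/v1/chat/completions"
MODEL = "Qwen/Qwen3-235B-A22B-Instruct-2507"

def build_payload(prompt: str, max_tokens: int = 512,
                  temperature: float = 0.7) -> dict:
    """Assemble the chat-completions request body shown above."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, api_key: str, **kwargs) -> str:
    """POST the request and return the assistant's reply text."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    resp = requests.post(API_URL, json=build_payload(prompt, **kwargs),
                         headers=headers)
    resp.raise_for_status()  # raise on 4xx/5xx rather than printing the error body
    return resp.json()["choices"][0]["message"]["content"]
```

Calling `chat("Hello", api_key="sk-...")` then returns just the assistant's text, with any API error raised as an exception.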

Try it now on SiliconFlow and explore these powerful capabilities firsthand!

Ready to accelerate your AI development?


© 2025 SiliconFlow Technology PTE. LTD.
