# Qwen3-Next-80B-A3B-Instruct

## About Qwen3-Next-80B-A3B-Instruct
Qwen3-Next-80B-A3B-Instruct is a next-generation foundation model from Alibaba's Qwen team, built on the new Qwen3-Next architecture and designed for highly efficient training and inference. It combines a hybrid attention mechanism (Gated DeltaNet plus Gated Attention), a high-sparsity Mixture-of-Experts (MoE) structure, and several stability optimizations. Although the model has 80 billion parameters in total, it activates only about 3 billion per token during inference, which sharply reduces compute cost and delivers over 10x the throughput of Qwen3-32B on long-context tasks beyond 32K tokens. This instruction-tuned variant is optimized for general-purpose tasks and does not support 'thinking' mode. On several benchmarks it is comparable to Qwen's flagship Qwen3-235B model, with a clear advantage in ultra-long-context scenarios.
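The "activates only ~3B of 80B parameters per token" behavior comes from top-k expert routing in the MoE layers. A minimal toy sketch of that routing idea (not the model's actual router, whose expert count and gating details are not described here):

```python
import math

def route_topk(logits, k):
    """Toy top-k MoE router: softmax over expert scores, keep the k
    highest-weight experts, and renormalize their weights so only a
    small fraction of experts (and parameters) run per token."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # keep only the top-k experts and renormalize their weights
    top = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return {i: probs[i] / norm for i in top}

# 4 experts, but only 2 are activated for this token
weights = route_topk([2.0, 0.5, 1.0, -1.0], k=2)
print(weights)
```

With high-sparsity MoE, the 3B/80B ≈ 3.75% activation ratio is what drives the inference-cost savings described above.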
## Available Serverless

Run queries immediately, pay only for usage.

Price: $0.14 (input) / $1.4 (output) per 1M tokens
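The listed prices make per-request cost easy to estimate. A small sketch using the rates above (the example token counts are illustrative):

```python
def estimate_cost(input_tokens, output_tokens,
                  in_price=0.14, out_price=1.4):
    """Estimate a request's USD cost from the listed per-1M-token
    prices: $0.14 for input tokens, $1.4 for output tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. a 200K-token context summarized into a 1K-token answer
cost = estimate_cost(200_000, 1_000)
print(f"${cost:.4f}")  # → $0.0294
```

Note that for long-context workloads the input side dominates: here 200K input tokens cost $0.028 while the 1K-token answer adds only $0.0014.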
## Metadata

**Specification**
- State: Available

**Architecture**
- Calibrated: No
- Mixture of Experts: Yes
- Total Parameters: 80B
- Activated Parameters: 3B
- Reasoning: No
- Precision: FP8
- Context length: 262K
- Max Tokens: 262K
## Supported Functionality

- Serverless: Supported
- Serverless LoRA: Not supported
- Fine-tuning: Not supported
- Embeddings: Not supported
- Rerankers: Not supported
- Image input: Not supported
- JSON Mode: Supported
- Structured Outputs: Not supported
- Tools: Supported
- FIM Completion: Not supported
- Chat Prefix Completion: Supported
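Since the model supports serverless chat with JSON Mode and tools, a request is typically built as an OpenAI-style chat-completions payload. A minimal sketch of such a payload using JSON Mode; the model ID shown and the provider's endpoint/auth details are assumptions to verify against your account:

```python
import json

# Hypothetical OpenAI-compatible chat-completions payload; the exact
# model ID, base URL, and API key come from your provider account.
payload = {
    "model": "Qwen/Qwen3-Next-80B-A3B-Instruct",  # assumed model ID
    "messages": [
        {"role": "user",
         "content": "List three uses of sparse MoE models as JSON."}
    ],
    # JSON Mode is listed as supported; Structured Outputs is not,
    # so constrain output via response_format, not a JSON schema.
    "response_format": {"type": "json_object"},
    "max_tokens": 1024,
}
body = json.dumps(payload)
print(body)
```

The serialized `body` would then be POSTed to the provider's chat-completions endpoint with your bearer token in the `Authorization` header.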
## SiliconFlow Service

Comprehensive solutions to deploy and scale your AI applications with maximum flexibility:
- 60% lower latency
- 2x higher throughput
- 65% cost savings
## Compare with Other Models

See how this model stacks up against others. All of the models below are from Qwen.

| Model | Type | Released | Total Context | Max output | Input ($/M tokens) | Output ($/M tokens) |
|---|---|---|---|---|---|---|
| Qwen3-VL-32B-Instruct | chat | Oct 21, 2025 | 262K | 262K | 0.2 | 0.6 |
| Qwen3-VL-32B-Thinking | chat | Oct 21, 2025 | 262K | 262K | 0.2 | 1.5 |
| Qwen3-VL-8B-Instruct | chat | Oct 15, 2025 | 262K | 262K | 0.18 | 0.68 |
| Qwen3-VL-8B-Thinking | chat | Oct 15, 2025 | 262K | 262K | 0.18 | 2.0 |
| Qwen3-VL-235B-A22B-Instruct | chat | Oct 4, 2025 | 262K | 262K | 0.3 | 1.5 |
| Qwen3-VL-235B-A22B-Thinking | chat | Oct 4, 2025 | 262K | 262K | 0.45 | 3.5 |
| Qwen3-VL-30B-A3B-Instruct | chat | Oct 5, 2025 | 262K | 262K | 0.29 | 1.0 |
| Qwen3-VL-30B-A3B-Thinking | chat | Oct 11, 2025 | 262K | 262K | 0.29 | 1.0 |
| Wan2.2-I2V-A14B | image-to-video | Aug 13, 2025 | — | — | $0.29 / video | — |