Qwen3-Next-80B-A3B-Thinking
About Qwen3-Next-80B-A3B-Thinking
Qwen3-Next-80B-A3B-Thinking is a next-generation foundation model from Alibaba's Qwen team, designed specifically for complex reasoning tasks. It is built on the Qwen3-Next architecture, which combines a hybrid attention mechanism (Gated DeltaNet plus Gated Attention) with a high-sparsity Mixture-of-Experts (MoE) structure for highly efficient training and inference. Although the sparse model has 80 billion parameters in total, it activates only about 3 billion per token during inference, sharply reducing compute costs and delivering more than 10x the throughput of Qwen3-32B on long-context tasks beyond 32K tokens. This 'Thinking' variant is tuned for demanding multi-step problems such as mathematical proofs, code synthesis, logical analysis, and agentic planning, and it outputs structured 'thinking' traces by default. In terms of performance, it surpasses the more costly Qwen3-32B-Thinking and has outperformed Gemini-2.5-Flash-Thinking on multiple benchmarks.
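For reference, a request to this model through an OpenAI-compatible chat completions endpoint might look like the sketch below. The base URL, environment variables, and the `reasoning_content` field for the thinking trace are assumptions, not confirmed details of this platform; check your provider's API reference.

```python
# A hedged sketch, assuming the model is served behind an OpenAI-compatible
# chat-completions endpoint. The base URL, env vars, and the `reasoning_content`
# field name are assumptions -- verify them against your provider's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("PROVIDER_BASE_URL", "https://api.example.com/v1"),  # placeholder URL
    api_key=os.environ["PROVIDER_API_KEY"],
)

resp = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Thinking",  # model ID may differ per provider
    messages=[{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    max_tokens=4096,
)

msg = resp.choices[0].message
# Thinking-style models often expose the reasoning trace in a separate field;
# fall back gracefully if this provider does not.
print(getattr(msg, "reasoning_content", None) or "(no separate thinking trace)")
print(msg.content)
```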
Available Serverless
Run queries immediately, pay only for usage.
$0.14 / $0.57 per 1M tokens (input / output)
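As a rough illustration of the usage-based pricing above, the per-request cost follows directly from the token counts. The token counts in the example are made up.

```python
# Back-of-the-envelope cost estimate using the serverless rates above
# ($0.14 per 1M input tokens, $0.57 per 1M output tokens).
INPUT_PRICE_PER_M = 0.14
OUTPUT_PRICE_PER_M = 0.57

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed serverless rates."""
    return (
        (input_tokens / 1_000_000) * INPUT_PRICE_PER_M
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    )

# Example: a 30K-token prompt with an 8K-token thinking + answer response (illustrative numbers).
print(f"${request_cost(30_000, 8_000):.4f}")  # ≈ $0.0088
```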
Metadata / Specification

State: Available

Architecture
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 80B
Activated Parameters: 3B
Reasoning: No
Precision: FP8
Context Length: 262K
Max Tokens: 262K
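The gap between total and activated parameters comes from the high-sparsity MoE design: each token is routed to only a few experts, so only a small slice of the weights participates in any forward pass. The sketch below is a generic top-k routing illustration, not the actual Qwen3-Next implementation, and all sizes in it are made up.

```python
# Conceptual sketch of high-sparsity MoE routing (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 32, 2   # toy sizes, not the real Qwen3-Next config
router_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w
    top = np.argsort(logits)[-top_k:]                           # indices of the k best experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()   # softmax over the chosen experts
    # Only top_k of n_experts weight matrices are used for this token,
    # which is why activated parameters are far fewer than total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_model))
print(f"experts used per token: {top_k}/{n_experts} ({top_k / n_experts:.0%} of expert weights)")
```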
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Not supported
JSON Mode: Not supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Supported
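Because the Tools capability is listed as supported, a function-calling request can follow the standard OpenAI-compatible tools schema. In the sketch below, the base URL, model ID, and the `get_weather` tool are hypothetical placeholders.

```python
# A minimal tool-calling sketch, assuming an OpenAI-compatible endpoint.
# The base URL, model ID, and the `get_weather` tool are hypothetical examples.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("PROVIDER_BASE_URL", "https://api.example.com/v1"),
    api_key=os.environ["PROVIDER_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Thinking",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou right now?"}],
    tools=tools,
)

# If the model decides to call the tool, the call appears here instead of plain text.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```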
Compare with Other Models
See how this model stacks up against others.

Qwen3-VL-32B-Instruct (Qwen, chat)
Released on: Oct 21, 2025 | Total Context: 262K | Max output: 262K | Input: $0.20 / M tokens | Output: $0.60 / M tokens

Qwen3-VL-32B-Thinking (Qwen, chat)
Released on: Oct 21, 2025 | Total Context: 262K | Max output: 262K | Input: $0.20 / M tokens | Output: $1.50 / M tokens

Qwen3-VL-8B-Instruct (Qwen, chat)
Released on: Oct 15, 2025 | Total Context: 262K | Max output: 262K | Input: $0.18 / M tokens | Output: $0.68 / M tokens

Qwen3-VL-8B-Thinking (Qwen, chat)
Released on: Oct 15, 2025 | Total Context: 262K | Max output: 262K | Input: $0.18 / M tokens | Output: $2.00 / M tokens

Qwen3-VL-235B-A22B-Instruct (Qwen, chat)
Released on: Oct 4, 2025 | Total Context: 262K | Max output: 262K | Input: $0.30 / M tokens | Output: $1.50 / M tokens

Qwen3-VL-235B-A22B-Thinking (Qwen, chat)
Released on: Oct 4, 2025 | Total Context: 262K | Max output: 262K | Input: $0.45 / M tokens | Output: $3.50 / M tokens

Qwen3-VL-30B-A3B-Instruct (Qwen, chat)
Released on: Oct 5, 2025 | Total Context: 262K | Max output: 262K | Input: $0.29 / M tokens | Output: $1.00 / M tokens

Qwen3-VL-30B-A3B-Thinking (Qwen, chat)
Released on: Oct 11, 2025 | Total Context: 262K | Max output: 262K | Input: $0.29 / M tokens | Output: $1.00 / M tokens

Wan2.2-I2V-A14B (Qwen, image-to-video)
Released on: Aug 13, 2025 | Price: $0.29 / video
Model FAQs: Usage, Deployment
Learn how to use and deploy this model with ease.
