Qwen3-Next-80B-A3B-Instruct


Qwen/Qwen3-Next-80B-A3B-Instruct

About Qwen3-Next-80B-A3B-Instruct

Qwen3-Next-80B-A3B-Instruct is a next-generation foundation model released by Alibaba's Qwen team. It is built on the new Qwen3-Next architecture, which is designed for highly efficient training and inference and combines a hybrid attention mechanism (Gated DeltaNet plus Gated Attention), a high-sparsity Mixture-of-Experts (MoE) structure, and several training-stability optimizations. Although the model has 80 billion parameters in total, it activates only about 3 billion per token during inference, which sharply reduces compute cost and delivers more than 10x the throughput of Qwen3-32B on long-context workloads beyond 32K tokens. This instruction-tuned variant is optimized for general-purpose tasks and does not support 'thinking' mode. Its performance is comparable to Qwen's flagship Qwen3-235B model on several benchmarks, with a clear advantage in ultra-long-context scenarios.
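Because the model is exposed through an OpenAI-compatible serverless endpoint (see the "Available Serverless" section below), a basic query looks like the sketch that follows. The base_url and the API_KEY environment variable are placeholders, not the provider's documented values; only the model identifier Qwen/Qwen3-Next-80B-A3B-Instruct comes from this page.

```python
# Minimal sketch: querying the serverless endpoint with an OpenAI-compatible client.
# The base_url and API-key variable are placeholders; use your provider's values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",   # placeholder endpoint
    api_key=os.environ["API_KEY"],           # placeholder key variable
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Qwen3-Next architecture in two sentences."},
    ],
    max_tokens=512,
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Since this is the instruction-tuned variant without 'thinking' mode, responses are returned directly with no reasoning traces to strip.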

Available Serverless

Run queries immediately, pay only for usage

$0.14 / $1.40 per 1M tokens (input / output)
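As a rough illustration of the pricing above, the snippet below estimates the serverless cost of a single request from its token counts. The token figures in the example are made-up values, not measurements.

```python
# Cost estimate under the listed serverless pricing:
# $0.14 per 1M input tokens, $1.40 per 1M output tokens.
INPUT_PRICE_PER_M = 0.14
OUTPUT_PRICE_PER_M = 1.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 200K-token long-context prompt with a 2K-token answer.
print(f"${request_cost(200_000, 2_000):.4f}")  # ~$0.0308
```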

Metadata

Created on: Sep 18, 2025
License: apache-2.0
Provider: Qwen

Specification

State: Available
Architecture: Qwen3-Next
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 80B
Activated Parameters: 3B
Reasoning: No
Precision: FP8
Context Length: 262K
Max Tokens: 262K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported (see the sketch after this list)
FIM Completion: Not supported
Chat Prefix Completion: Supported
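Since tool calling is supported over the serverless API, here is a minimal sketch of a tool-call request with the same OpenAI-compatible client. The endpoint, the API_KEY variable, and the get_weather tool are illustrative assumptions, not part of this page or the provider's documentation.

```python
# Minimal tool-calling sketch (Tools: Supported). Endpoint, key variable,
# and the get_weather tool are illustrative placeholders.
import os
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key=os.environ["API_KEY"])

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# Print any tool calls the model decided to make.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

For JSON Mode, the same call can pass response_format={"type": "json_object"}; note that schema-constrained Structured Outputs are listed as not supported, so any schema validation has to happen on the client side.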

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.

Ready to accelerate your AI development?