Qwen3-30B-A3B

Qwen/Qwen3-30B-A3B

About Qwen3-30B-A3B

Qwen3-30B-A3B is the latest large language model in the Qwen series, featuring a Mixture-of-Experts (MoE) architecture with 30.5B total parameters and 3.3B activated parameters. The model supports seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue). It demonstrates significantly enhanced reasoning capabilities and superior human-preference alignment in creative writing, role-playing, and multi-turn dialogue. It also excels at agent tasks, integrating precisely with external tools, and supports over 100 languages and dialects with strong multilingual instruction following and translation.
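The mode switch described above is typically exposed through an OpenAI-compatible chat endpoint. The sketch below builds such a request payload; the `chat_template_kwargs`/`enable_thinking` field follows the pattern documented for Qwen3 serving stacks, but the exact parameter name and placement may differ by provider, so treat it as an assumption and check your provider's API reference.

```python
import json

def build_request(prompt: str, thinking: bool = True) -> dict:
    """Build a hypothetical OpenAI-compatible chat request for Qwen3-30B-A3B."""
    return {
        "model": "Qwen/Qwen3-30B-A3B",
        "messages": [{"role": "user", "content": prompt}],
        # Assumed switch between thinking mode (reasoning traces) and
        # non-thinking mode (direct answers); verify with your provider.
        "chat_template_kwargs": {"enable_thinking": thinking},
        "max_tokens": 1024,
    }

payload = build_request("Solve: 17 * 24", thinking=True)
print(json.dumps(payload, indent=2))
```

Sending this payload to the provider's `/chat/completions` endpoint (with an API key) would then return either a reasoning trace plus answer or a direct answer, depending on the flag.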

Available Serverless

Run queries immediately, pay only for usage

$0.09 / $0.45 Per 1M Tokens (input/output)
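The listed rates make cost estimation simple arithmetic: input and output tokens are billed separately per million. A minimal sketch:

```python
# Serverless rates listed above: $0.09 per 1M input tokens,
# $0.45 per 1M output tokens.
INPUT_RATE = 0.09 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.45 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a workload at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: 500K input tokens + 100K output tokens
cost = estimate_cost(500_000, 100_000)
print(f"${cost:.4f}")  # → $0.0900
```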

Metadata

Created on: Apr 30, 2025
License: apache-2.0
Provider: Qwen
HuggingFace

Specification

State: Available

Architecture

Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 30.5B
Activated Parameters: 3.3B
Reasoning: No
Precision: FP8
Context length: 131K
Max Tokens: 131K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Supported
Structured Outputs: Supported via JSON Mode only
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
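Two of the supported features above, JSON Mode and tool calling, are commonly combined in one request. The payload below sketches this using the widely used OpenAI-compatible shapes for `response_format` and `tools`; the `get_weather` tool is a hypothetical example, and field names should be verified against the provider's API reference.

```python
import json

# Hypothetical request exercising JSON Mode and tool calling together.
payload = {
    "model": "Qwen/Qwen3-30B-A3B",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    # JSON Mode: constrain the reply to valid JSON.
    "response_format": {"type": "json_object"},
    # Tool calling: declare a hypothetical function the model may invoke.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
print(json.dumps(payload, indent=2))
```

When the model decides to call the tool, the response carries a `tool_calls` entry with JSON arguments (e.g. `{"city": "Paris"}`) for your code to execute and feed back as a `tool` role message.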

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.

Ready to accelerate your AI development?