QwQ-32B

Qwen/QwQ-32B

About QwQ-32B

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ is capable of thinking and reasoning, which yields significantly enhanced performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, achieving performance competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini. The model incorporates technologies like RoPE, SwiGLU, RMSNorm, and attention QKV bias, with 64 layers and 40 query attention heads (8 KV heads in its GQA architecture).
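For orientation, here is a minimal sketch of querying the model through an OpenAI-compatible serverless endpoint. The base_url is a placeholder, not this provider's actual endpoint, and the generous max_tokens leaves room for the long reasoning trace QwQ emits before its final answer.

# Minimal sketch: querying QwQ-32B over an OpenAI-compatible endpoint.
# The base_url and api_key are placeholders -- substitute your provider's values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Qwen/QwQ-32B",
    messages=[{"role": "user", "content": "How many primes are below 50?"}],
    max_tokens=4096,  # headroom for the chain of thought plus the answer
)

# QwQ typically emits its reasoning first (often wrapped in <think> tags),
# followed by the final answer.
print(response.choices[0].message.content)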

Available Serverless

Run queries immediately, pay only for usage

$0.15 / $0.58 per 1M tokens (input/output)
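Since billing is purely usage-based, per-request cost is straightforward arithmetic over the two rates. A small sketch with illustrative token counts:

# Back-of-the-envelope cost check for the rates above:
# $0.15 per 1M input tokens, $0.58 per 1M output tokens.
INPUT_PRICE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 0.58 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Reasoning models are output-heavy, so the output rate dominates:
# a 2K-token prompt with a 10K-token trace plus answer costs about $0.0061.
print(f"${request_cost(2_000, 10_000):.4f}")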

Metadata

Created on

Mar 6, 2025

License

apache-2.0

Provider

Qwen

HuggingFace

Specification

State

Available

Architecture

Calibrated

No

Mixture of Experts

No

Total Parameters

32.5B

Activated Parameters

32.5B

Reasoning

Yes

Precision

FP8

Context length

131K

Max Tokens

131K

Supported Functionality

Serverless

Supported

Serverless LoRA

Not supported

Fine-tuning

Not supported

Embeddings

Not supported

Rerankers

Not supported

Image Input

Not supported

JSON Mode

Not supported

Structured Outputs

Not supported

Tools

Supported (see the tool-calling sketch after this list)

FIM Completion

Not supported

Chat Prefix Completion

Not supported
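Tool use is the one advanced decoding feature listed as supported above. Here is a minimal sketch of a function-call round trip, again assuming an OpenAI-compatible endpoint; the endpoint, key, and get_weather schema are illustrative assumptions, not provider specifics.

# Minimal tool-calling sketch; names and endpoint are hypothetical.
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen/QwQ-32B",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# When the model elects to call a tool, the arguments arrive as
# structured JSON rather than free text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)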

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.
