Qwen3-Embedding-8B

Qwen/Qwen3-Embedding-8B

About Qwen3-Embedding-8B

Qwen3-Embedding-8B is the latest model in the Qwen3 Embedding series, purpose-built for text embedding and ranking tasks. Built on the dense foundation models of the Qwen3 series, this 8B-parameter model supports context lengths up to 32K tokens and produces embeddings with up to 4096 dimensions. It inherits the base models' strong multilingual coverage (over 100 languages), long-text understanding, and reasoning ability. As of June 5, 2025, it ranks No. 1 on the MTEB multilingual leaderboard (score 70.58) and delivers state-of-the-art results across text retrieval, code retrieval, text classification, clustering, and bitext mining. The model also offers flexible vector dimensions (32 to 4096) and instruction-aware prompting for better performance on specific tasks and scenarios.
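The "instruction-aware" capability mentioned above means queries can be prefixed with a task description to steer the embedding. A minimal sketch of that formatting, assuming the `Instruct: ... / Query: ...` template from the Qwen3-Embedding model card (the task wording here is illustrative, not prescribed):

```python
def format_query(task: str, query: str) -> str:
    """Prepend a task instruction to a query before embedding it.

    Follows the two-line template used by Qwen3 embedding models;
    documents are typically embedded without any instruction prefix.
    """
    return f"Instruct: {task}\nQuery: {query}"


# Example task description (an assumption for illustration):
task = "Given a web search query, retrieve relevant passages that answer the query"
print(format_query(task, "What is the capital of China?"))
```

Per the series' guidance, a well-chosen task instruction typically improves retrieval quality by a percentage point or two over embedding the raw query.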

Available Serverless

Run queries immediately, pay only for usage

$0.04 per 1M tokens
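At the listed rate, usage cost scales linearly with token count. A quick sketch of the arithmetic (the price constant comes from the pricing above; everything else is illustrative):

```python
PRICE_PER_MILLION_TOKENS = 0.04  # USD, from the serverless pricing above


def embedding_cost(num_tokens: int) -> float:
    """Estimated serverless cost in USD for embedding `num_tokens` tokens."""
    return num_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS


# Embedding a 10M-token corpus would cost about $0.40.
print(f"${embedding_cost(10_000_000):.2f}")  # → $0.40
```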

Metadata

Created on

Jun 6, 2025

License

apache-2.0

Provider

Qwen

Specification

State

Available

Architecture

Calibrated

No

Mixture of Experts

No

Total Parameters

8B

Activated Parameters

8B

Reasoning

No

Precision

FP8

Context length

33K

Max Tokens

Supported Functionality

Serverless

Supported

Serverless LoRA

Not supported

Fine-tuning

Not supported

Embeddings

Supported

Rerankers

Not supported

Support image input

Not supported

JSON Mode

Not supported

Structured Outputs

Not supported

Tools

Not supported

FIM Completion

Not supported

Chat Prefix Completion

Not supported
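Since embeddings are the supported output, the flexible 32-to-4096 dimension range noted above is worth a sketch. Assuming matryoshka-style dimension reduction (truncate the leading components, then L2-renormalize) — a common scheme for flexible-dimension embeddings, though this page does not spell out the exact mechanism:

```python
import math


def truncate_embedding(vec: list[float], dim: int) -> list[float]:
    """Keep the first `dim` components and L2-renormalize.

    Sketch of matryoshka-style truncation: smaller vectors trade a
    little retrieval quality for less storage and faster similarity
    search, while cosine similarity remains meaningful.
    """
    v = vec[:dim]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]


# A 4096-d embedding shrunk to 256 dimensions (toy vector for illustration):
full = [math.sin(i) for i in range(4096)]
small = truncate_embedding(full, 256)
print(len(small))  # → 256
```

Downstream, the truncated vectors are compared with the same cosine-similarity machinery as full-size ones; only the index storage shrinks.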

Model FAQs: Usage, Deployment

Learn how to use, fine-tune, and deploy this model with ease.

Ready to accelerate your AI development?

Ready to accelerate your AI development?