Qwen3-Embedding-4B

Qwen/Qwen3-Embedding-4B

About Qwen3-Embedding-4B

Qwen3-Embedding-4B is the latest model in the Qwen3 Embedding series, designed specifically for text embedding and ranking tasks. Built on the dense foundation models of the Qwen3 series, this 4B-parameter model supports context lengths of up to 32K tokens and produces embeddings with up to 2560 dimensions. It inherits the base models' multilingual capabilities (over 100 languages), along with long-text understanding and reasoning skills. The model scores 69.45 on the MTEB multilingual leaderboard and performs strongly across tasks including text retrieval, code retrieval, text classification, clustering, and bitext mining. Flexible output dimensions (32 to 2560) and instruction-aware prompting allow it to be adapted to specific tasks and scenarios, offering a strong balance between efficiency and effectiveness.
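The two features called out above — instruction-aware queries and flexible vector dimensions — can be sketched in plain Python. The query template below follows the pattern published for the Qwen3 Embedding series (task instruction prepended to the query, documents embedded as-is); the truncation helper assumes the flexible-dimension feature works by keeping a prefix of the full embedding and re-normalizing, and the function names are illustrative:

```python
import math

def format_query(task: str, query: str) -> str:
    # Instruction-aware prompting: queries carry a task description,
    # documents are embedded without one. Template follows the pattern
    # published for the Qwen3 Embedding series.
    return f"Instruct: {task}\nQuery: {query}"

def truncate_embedding(vec: list[float], dim: int) -> list[float]:
    # Flexible output dimensions (32..2560): keep the first `dim`
    # components, then re-normalize so cosine similarity remains
    # meaningful on the shortened vectors.
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]
```

For example, a 2560-dimensional embedding can be cut down to 256 dimensions with `truncate_embedding(full_vec, 256)` to trade a little accuracy for 10x less vector-store space.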

Available Serverless

Run queries immediately, pay only for usage

$

0.02

Per 1M Tokens
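Serverless pricing is linear in token count, so cost estimation is a one-line calculation (the helper name is illustrative):

```python
PRICE_PER_MILLION_TOKENS = 0.02  # USD, serverless rate quoted above

def embedding_cost(tokens: int) -> float:
    # Usage-based pricing: cost scales linearly with tokens embedded,
    # with no fixed fee on the serverless tier.
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
```

For instance, embedding a 50M-token corpus would cost `embedding_cost(50_000_000)`, i.e. $1.00 at this rate.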

Metadata

Created on

Jun 6, 2025

License

apache-2.0

Provider

Qwen

Specification

State

Available

Architecture

Calibrated

No

Mixture of Experts

No

Total Parameters

4B

Activated Parameters

4B

Reasoning

No

Precision

FP8

Context length

32K

Supported Functionality

Serverless

Supported

Serverless LoRA

Not supported

Fine-tuning

Not supported

Embeddings

Supported

Rerankers

Not supported

Support image input

Not supported

JSON Mode

Not supported

Structured Outputs

Not supported

Tools

Not supported

FIM Completion

Not supported

Chat Prefix Completion

Not supported

Model FAQs: Usage, Deployment

Learn how to use, fine-tune, and deploy this model with ease.

Ready to accelerate your AI development?