
Qwen3-Embedding-0.6B API, Deployment, Pricing
Qwen/Qwen3-Embedding-0.6B
Qwen3-Embedding-0.6B is an open-source model in the Qwen3 Embedding series, specifically designed for text embedding and ranking tasks. Built upon the dense foundational models of the Qwen3 series, this 0.6B-parameter model supports context lengths up to 32K and generates embeddings with up to 1024 dimensions. The model inherits the Qwen3 series' multilingual capabilities, supporting over 100 languages, along with long-text understanding and reasoning skills. It achieves strong performance on the MTEB multilingual leaderboard (score 64.33) and delivers solid results across tasks including text retrieval, code retrieval, text classification, clustering, and bitext mining. The model offers flexible vector dimensions (32 to 1024) and instruction-aware capabilities for improved performance on specific tasks and scenarios, making it a strong choice for applications that prioritize both efficiency and effectiveness.
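To illustrate the retrieval use case and the flexible vector dimensions described above, here is a minimal, self-contained sketch: it ranks candidate passages by cosine similarity against a query embedding, and shows Matryoshka-style dimension reduction (truncate the leading components, then re-normalize). The 8-dimensional vectors are hypothetical stand-ins for the 1024-dimensional output the model would actually return.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def truncate(vec, dim):
    """Keep the first `dim` components and re-normalize to unit length
    (the usual way flexible embedding dimensions are consumed)."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Hypothetical embeddings (8-dim here; the real model emits up to 1024-dim).
query = [0.12, -0.30, 0.45, 0.08, -0.22, 0.31, -0.05, 0.19]
doc_a = [0.10, -0.28, 0.47, 0.05, -0.20, 0.33, -0.07, 0.21]   # near the query
doc_b = [-0.40, 0.25, -0.10, 0.36, 0.15, -0.29, 0.22, -0.08]  # far from it

# Rank at full dimension, then at a reduced dimension.
full_a, full_b = cosine(query, doc_a), cosine(query, doc_b)
half_a, half_b = (cosine(truncate(query, 4), truncate(doc_a, 4)),
                  cosine(truncate(query, 4), truncate(doc_b, 4)))
print(full_a > full_b, half_a > half_b)
```

In practice the vectors would come from an embeddings API call; the ranking logic stays the same whether you use the full 1024 dimensions or a truncated, re-normalized prefix.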
Details
Model Provider
Qwen
Type
text
Sub Type
embedding
Size
0.6B
Publish Time
Jun 6, 2025
Input Price
$
0.01
/ M Tokens
Context length
33K
Tags
1024 dim, 33K
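Given the listed input price of $0.01 per million tokens, estimating the cost of an embedding job is simple arithmetic; the sketch below captures it (the corpus size is a made-up example).

```python
PRICE_PER_M_TOKENS = 0.01  # USD per 1M input tokens, from the listing above

def embedding_cost(total_tokens: int) -> float:
    """Estimated cost in USD to embed `total_tokens` input tokens."""
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS

# Example: embedding a hypothetical 10M-token corpus.
print(f"${embedding_cost(10_000_000):.2f}")  # → $0.10
```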
Compare with Other Models
See how this model stacks up against others.
Model FAQs: Usage, Deployment
Learn how to use, fine-tune, and deploy this model with ease.
What is the Qwen/Qwen3-Embedding-0.6B model, and what are its core capabilities and technical specifications?
In which business scenarios does Qwen/Qwen3-Embedding-0.6B perform well? Which industries or applications is it suitable for?
How can the performance and effectiveness of Qwen/Qwen3-Embedding-0.6B be optimized in actual business use?
Compared with other models, when should Qwen/Qwen3-Embedding-0.6B be selected?
What are SiliconFlow's key strengths in AI serverless deployment for Qwen/Qwen3-Embedding-0.6B?
What makes SiliconFlow the top platform for Qwen/Qwen3-Embedding-0.6B API?