DeepSeek-R1-Distill-Qwen-32B API, Deployment, Pricing
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
DeepSeek-R1-Distill-Qwen-32B is a distilled model based on Qwen2.5-32B. It was fine-tuned on 800k curated samples generated by DeepSeek-R1 and demonstrates exceptional performance across mathematics, programming, and reasoning tasks. It achieves strong results on benchmarks such as AIME 2024, MATH-500, and GPQA Diamond, including a notable 94.3% accuracy on MATH-500, showcasing its strong mathematical reasoning capabilities.
Details
Model Provider: deepseek-ai
Type: text
Sub Type: chat
Size: 32B
Publish Time: Jan 20, 2025
Input Price: $0.18 / M Tokens
Output Price: $0.18 / M Tokens
Context Length: 131K
Tags: Reasoning, 32B, 131K
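The model is typically served through an OpenAI-compatible chat completions endpoint. The snippet below is a minimal sketch, assuming SiliconFlow's OpenAI-compatible base URL, the model identifier shown above, and a placeholder API key; the sampling parameters and the prompt are illustrative only and should be adapted to your own deployment.

# Minimal sketch: calling DeepSeek-R1-Distill-Qwen-32B via an
# OpenAI-compatible chat completions endpoint.
# The base URL, API key, and sampling parameters are assumptions/placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",  # assumed SiliconFlow endpoint
    api_key="YOUR_API_KEY",                    # placeholder
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
    max_tokens=2048,   # reasoning models can emit long chains of thought
    temperature=0.6,   # illustrative value, not an official recommendation
)

print(response.choices[0].message.content)

# Rough cost estimate at the listed $0.18 per million tokens (input and output)
usage = response.usage
cost = (usage.prompt_tokens + usage.completion_tokens) * 0.18 / 1_000_000
print(f"Approximate cost for this request: ${cost:.6f}")

Because the model is a reasoning model, responses may include lengthy intermediate reasoning before the final answer, so a generous max_tokens budget is advisable; the per-request cost calculation above simply applies the listed price to the token counts reported in the response.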
Model FAQs: Usage, Deployment
Learn how to use, fine-tune, and deploy this model with ease.
What is the DeepSeek-R1-Distill-Qwen-32B model, and what are its core capabilities and technical specifications?
In which business scenarios does DeepSeek-R1-Distill-Qwen-32B perform well? Which industries or applications is it suitable for?
How can the performance and effectiveness of DeepSeek-R1-Distill-Qwen-32B be optimized in real-world business use?
When should DeepSeek-R1-Distill-Qwen-32B be chosen over other models?
What are SiliconFlow's key strengths in AI serverless deployment for DeepSeek-R1-Distill-Qwen-32B?
What makes SiliconFlow the top platform for DeepSeek-R1-Distill-Qwen-32B API?