DeepSeek-R1-Distill-Qwen-7B

deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

About DeepSeek-R1-Distill-Qwen-7B

DeepSeek-R1-Distill-Qwen-7B is a distilled model based on Qwen2.5-Math-7B. It was fine-tuned on 800k curated samples generated by DeepSeek-R1 and demonstrates strong reasoning capabilities. The model achieves 92.8% accuracy on MATH-500, a 55.5% pass rate on AIME 2024, and a CodeForces rating of 1189, notable mathematical and programming performance for a model at the 7B scale.

Available Serverless

Run queries immediately, pay only for usage

$0.05 / $0.05 Per 1M Tokens (input/output)

Metadata

Created on

Jan 20, 2025

License

MIT

Provider

DeepSeek

Specification

State

Available

Architecture

Calibrated

No

Mixture of Experts

No

Total Parameters

7B

Activated Parameters

7B

Reasoning

No

Precision

FP8

Context length

33K

Max Tokens

16K
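The context window and the output cap together bound how large a prompt can be: with a 33K-token context and up to 16K generated tokens, a request that reserves the full output budget leaves roughly 17K tokens for input. A minimal budget-check sketch using the limits from this spec sheet (the prompt sizes passed in are illustrative):

```python
# Budget check: prompt tokens + reserved output tokens must fit the context window.
# The two limits come from this model's spec sheet ("33K" context, "16K" max tokens).
CONTEXT_LENGTH = 33_000  # total context window
MAX_OUTPUT = 16_000      # maximum generated tokens per request

def max_input_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt after reserving room for the completion."""
    return CONTEXT_LENGTH - reserved_output

def fits(prompt_tokens: int, reserved_output: int = MAX_OUTPUT) -> bool:
    """True if the request stays within the context window."""
    return prompt_tokens + reserved_output <= CONTEXT_LENGTH

print(max_input_tokens())   # 17000 tokens available when reserving the full output budget
print(fits(20_000))         # False: 20000 + 16000 overflows the 33K window
```

Shrinking `max_tokens` on a request frees the difference for additional prompt tokens.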

Supported Functionality

Serverless

Supported

Serverless LoRA

Not supported

Fine-tuning

Not supported

Embeddings

Not supported

Rerankers

Not supported

Support image input

Not supported

JSON Mode

Supported

Structured Outputs

Not supported

Tools

Supported

FIM Completion

Supported

Chat Prefix Completion

Not supported
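Since JSON Mode and Tools are both listed as supported, a single request can combine structured JSON output with function-style tool definitions. The sketch below only assembles an OpenAI-compatible chat-completions request body; the endpoint, exact field names, and the tool schema are assumptions to verify against the provider's API reference, not values confirmed by this page:

```python
import json

# Sketch of a chat-completions request body exercising JSON Mode and Tools,
# assuming an OpenAI-compatible serverless API (an assumption; check the
# provider's docs for the exact model slug and accepted fields).
def build_request(prompt: str) -> dict:
    return {
        "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 16_000,                        # spec-sheet cap ("16K")
        "response_format": {"type": "json_object"},  # JSON Mode (supported)
        "tools": [{                                  # Tools (supported); schema is illustrative
            "type": "function",
            "function": {
                "name": "evaluate_expression",
                "description": "Evaluate a math expression and return the result.",
                "parameters": {
                    "type": "object",
                    "properties": {"expression": {"type": "string"}},
                    "required": ["expression"],
                },
            },
        }],
    }

body = build_request("Solve 3x + 5 = 20 and reply as JSON.")
print(json.dumps(body)[:60])  # serializes cleanly for an HTTP POST
```

Because Structured Outputs is not supported here, `response_format` is limited to the generic `json_object` mode; schema enforcement would have to happen client-side.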

Model FAQs: Usage, Deployment

Learn how to use, fine-tune, and deploy this model with ease.

Ready to accelerate your AI development?
