About DeepSeek-R1-Distill-Qwen-1.5B
DeepSeek-R1-Distill-Qwen-1.5B is a distilled model based on Qwen2.5-Math-1.5B. It was fine-tuned on 800,000 curated samples generated by DeepSeek-R1 and delivers strong performance across a range of benchmarks. As a lightweight model, it achieves 83.9% accuracy on MATH-500, a 28.9% pass rate on AIME 2024, and a 954 rating on CodeForces, demonstrating reasoning ability beyond its parameter scale.
Explore how the model's advanced reasoning and coding capabilities translate into real-world applications.
Automated Code Generation & Debugging
Generate, optimize, and debug complex code snippets across various programming languages. The model's strong reasoning helps identify logical errors and suggest efficient solutions.
Use Case Example:
"A software engineer used the model to refactor a legacy Python module, resulting in a 40% reduction in code complexity and a 25% improvement in execution speed."
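A debugging request like the one described above is typically sent as a chat-completion payload. The sketch below builds such a payload locally; the model identifier, system prompt, and helper name are illustrative assumptions, not part of any official API surface.

```python
# Minimal sketch of a code-debugging request payload for an
# OpenAI-compatible chat endpoint. The "deepseek-chat" model name
# and the prompt wording are assumptions for illustration.
import json

def build_debug_request(buggy_code: str, error_message: str) -> dict:
    """Assemble a chat-completion payload asking the model to debug code."""
    return {
        "model": "deepseek-chat",  # assumed model identifier
        "messages": [
            {"role": "system",
             "content": "You are a careful Python debugger. Explain the bug, then give a fixed version."},
            {"role": "user",
             "content": f"This code raises an error:\n\n{buggy_code}\n\nError:\n{error_message}"},
        ],
        "temperature": 0.0,  # deterministic output suits debugging
    }

payload = build_debug_request("print(1/0)", "ZeroDivisionError: division by zero")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint; setting temperature to 0 keeps the fix reproducible across runs.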
Scientific & Mathematical Research
Assist researchers by solving complex mathematical problems, formulating hypotheses, and analyzing data. Its ability to reason through abstract concepts makes it a powerful tool for scientific discovery.
Use Case Example:
"A physicist modeled a complex quantum mechanics problem, and the model provided a step-by-step derivation that led to a novel insight, which was later verified experimentally."
Intelligent Agent & Tool Integration
Build sophisticated AI agents that can understand user requests, select the appropriate tools (e.g., APIs, databases), and execute multi-step tasks autonomously.
Use Case Example:
"An automated travel assistant powered by the model booked a complete itinerary by interacting with flight, hotel, and car rental APIs based on a single natural language request from the user."
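The agent pattern above has three parts: the host advertises tool schemas, the model emits a tool call, and the host executes it and returns the result. The sketch below shows that loop's host side; the `search_flights` tool and the dispatcher are hypothetical examples, though the schema shape follows the widely used function-calling JSON format.

```python
# Hedged sketch of the tool-integration pattern: tool schemas are sent
# with the request, the model replies with a tool call, and the host
# dispatches it. The tool here is a stub invented for illustration.
import json

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "search_flights",
            "description": "Find flights between two cities on a date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "origin": {"type": "string"},
                    "destination": {"type": "string"},
                    "date": {"type": "string", "description": "YYYY-MM-DD"},
                },
                "required": ["origin", "destination", "date"],
            },
        },
    },
]

def dispatch(tool_call: dict) -> str:
    """Execute the tool the model selected (stubbed for illustration)."""
    name = tool_call["name"]
    args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string
    if name == "search_flights":
        return f"flights from {args['origin']} to {args['destination']} on {args['date']}"
    raise ValueError(f"unknown tool: {name}")

# The model would emit a call like this after reading the user request:
result = dispatch({"name": "search_flights",
                   "arguments": '{"origin": "SFO", "destination": "NRT", "date": "2025-06-01"}'})
print(result)
```

In a full agent, the tool result is appended to the conversation and the model is called again, repeating until it produces a final answer instead of another tool call.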
Advanced Conversational AI
Create highly engaging and context-aware chatbots, virtual assistants, or role-playing characters for gaming and entertainment. The model excels at maintaining coherent and natural-sounding dialogue.
Use Case Example:
"A gaming company implemented an NPC (Non-Player Character) using the model, which provided dynamic, unscripted interactions that significantly enhanced player immersion."
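Context-aware dialogue of the kind described above is usually achieved by resending the full message history on every turn, so the model sees all prior exchanges. The sketch below shows that bookkeeping; the NPC persona and turn contents are invented for illustration.

```python
# Minimal sketch of keeping a chatbot context-aware: the host appends
# each turn to a running history and resends it all on the next request.
def add_turn(history: list, role: str, content: str) -> list:
    """Append one message to the conversation history."""
    history.append({"role": role, "content": content})
    return history

# A system prompt fixes the persona; user/assistant turns accumulate.
history = [{"role": "system",
            "content": "You are a tavern keeper NPC in a fantasy game."}]
add_turn(history, "user", "Any rumors tonight?")
add_turn(history, "assistant", "Aye, travelers speak of lights in the old mine...")
add_turn(history, "user", "Which mine?")  # the model resolves "which" from prior turns
```

Because the whole history must fit in the model's context window, long sessions typically trim or summarize the oldest turns while keeping the system prompt.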
Metadata

Specifications
Status: Deprecated

Architecture
Calibrated: No
Mixture of Experts: No
Total Parameters: 2B
Activated Parameters:
Reasoning: No
Precision: FP8
Context Length: 33K
Max Tokens:
Compare with Other Models
See how this model compares against other models.
| Provider | Type | Model | Release Date | Total Context | Max Output | Input ($ / M tokens) | Output ($ / M tokens) |
|---|---|---|---|---|---|---|---|
| DeepSeek | chat | DeepSeek-V3.2 | 2025/12/04 | 164K | 164K | $0.27 | $0.42 |
| DeepSeek | chat | DeepSeek-V3.2-Exp | 2025/10/10 | 164K | 164K | $0.27 | $0.41 |
| DeepSeek | chat | DeepSeek-V3.1-Terminus | 2025/09/29 | 164K | 164K | $0.27 | $1.00 |
| DeepSeek | chat | DeepSeek-V3.1 | 2025/08/25 | 164K | 164K | $0.27 | $1.00 |
| DeepSeek | chat | DeepSeek-V3 | 2024/12/26 | 164K | 164K | $0.25 | $1.00 |
| DeepSeek | chat | DeepSeek-R1 | 2025/05/28 | 164K | 164K | $0.50 | $2.18 |
| DeepSeek | chat | DeepSeek-R1-Distill-Qwen-32B | 2025/01/20 | 131K | 131K | $0.18 | $0.18 |
| DeepSeek | chat | DeepSeek-R1-Distill-Qwen-14B | 2025/01/20 | 131K | 131K | $0.10 | $0.10 |
| DeepSeek | chat | DeepSeek-R1-Distill-Qwen-7B | 2025/01/20 | 33K | 16K | $0.05 | $0.05 |
