DeepSeek-V3-0324 (671B) is now live on SiliconFlow — delivering major improvements in reasoning, writing, and math performance.
Reinforcement learning boosts performance on complex reasoning tasks
8K token context window
Competitive price: $0.29/M tokens (input), $1.15/M tokens (output)
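The listed prices make per-request cost easy to estimate. A minimal sketch using the rates above (the token counts are illustrative):

```python
# Estimate request cost from the listed SiliconFlow prices.
INPUT_PRICE_PER_M = 0.29   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.15  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 1,000-token completion:
print(f"${estimate_cost(2000, 1000):.6f}")  # → $0.001730
```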
DeepSeek-V3-0324 keeps the same model size, pricing, and API compatibility while bringing major gains in reasoning, writing, and math. With support for Function Calling, JSON mode, Prefix completion, and FIM, it is a drop-in replacement that delivers a big upgrade.
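Function Calling is exposed through the OpenAI-compatible `tools` parameter of `client.chat.completions.create`. A hedged sketch of what a tool definition looks like and how the model's arguments come back (the `get_weather` tool is invented for illustration):

```python
import json

# Hypothetical tool definition; pass it as the `tools` argument of
# client.chat.completions.create alongside your messages.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # invented example tool
        "description": "Get the current weather for a city.",
        "parameters": {  # JSON Schema describing the tool's arguments
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# When the model decides to call the tool, the arguments arrive as a
# JSON string (response.choices[0].message.tool_calls[0].function.arguments):
example_arguments = '{"city": "Paris"}'
args = json.loads(example_arguments)
print(args["city"])  # → Paris
```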
Quick Start
Try DeepSeek-V3-0324 in the SiliconFlow playground.
Quick Access to API
The following Python example demonstrates how to call the DeepSeek-V3-0324 model via SiliconFlow's API endpoint. More detailed API specifications are available for developers.
from openai import OpenAI

url = 'https://api.siliconflow.com/v1/'
api_key = 'your api_key'  # replace with your SiliconFlow API key

client = OpenAI(
    base_url=url,
    api_key=api_key
)

content = ""
reasoning_content = ""
messages = [
    {"role": "user", "content": "Prove the Pythagorean theorem and provide a simple example."}
]

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=messages,
    stream=True,
    max_tokens=4096,
    extra_body={
        "thinking_budget": 1024
    }
)

# Accumulate the streamed reply; reasoning_content only appears on
# deltas that carry it, so guard the attribute access.
for chunk in response:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        content += delta.content
    if getattr(delta, "reasoning_content", None):
        reasoning_content += delta.reasoning_content

# Feed the assistant's reply back and ask the model to continue.
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "Continue"})
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=messages,
    stream=True
)

Optimized for demanding reasoning and real-world use, DeepSeek-V3-0324 is ready to help you build smarter tools, better agents, and long-form assistants.
Try DeepSeek-V3 on SiliconFlow now and see what you can create.