DeepSeek-V3-0324 Now Live on SiliconFlow

March 27, 2025

DeepSeek-V3-0324 (671B) is now live on SiliconFlow, delivering major improvements in reasoning, writing, and math performance.

  • Reinforcement learning boosts performance on complex reasoning tasks

  • 8K-token context window

  • Competitive pricing: $0.29 per million input tokens, $1.15 per million output tokens
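At those rates, the cost of a single call is simple arithmetic. The sketch below estimates it; the token counts are illustrative, not from the source:

```python
# Listed rates, converted to USD per token (assumed per-million-token pricing)
INPUT_PRICE = 0.29 / 1_000_000   # USD per input token
OUTPUT_PRICE = 1.15 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Example: a request with 10k input tokens and 2k output tokens
print(round(estimate_cost(10_000, 2_000), 4))  # → 0.0052
```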

DeepSeek-V3-0324 keeps the same model size, pricing, and API compatibility, but delivers major gains in reasoning, writing, and math. It supports function calling, JSON Mode, prefix completion, and FIM (fill-in-the-middle): a minor version that delivers a major upgrade.
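Of the features listed above, JSON Mode is typically requested through the OpenAI-compatible `response_format` parameter. The helper below is a minimal sketch under that assumption; the exact option shape should be checked against SiliconFlow's API reference:

```python
def build_json_mode_request(prompt: str) -> dict:
    """Build kwargs for a JSON Mode chat completion request.

    Assumes the OpenAI-compatible `response_format={"type": "json_object"}`
    option; verify the exact name in SiliconFlow's API reference.
    """
    return {
        "model": "deepseek-ai/DeepSeek-V3",
        "messages": [
            # With JSON Mode, it is good practice to also instruct the model
            # to answer in JSON, so the constraint and the prompt agree.
            {"role": "system", "content": "Reply with a JSON object only."},
            {"role": "user", "content": prompt},
        ],
        "response_format": {"type": "json_object"},
    }

# Usage (hypothetical, requires a configured client):
# client = OpenAI(base_url="https://api.siliconflow.com/v1/", api_key="your api_key")
# response = client.chat.completions.create(**build_json_mode_request("List three primes."))
```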

Quick Start

Try DeepSeek-V3-0324 on the SiliconFlow model playground.

Quick API Access

The following Python example shows how to call the DeepSeek-V3-0324 model through SiliconFlow's API endpoint. More detailed API specifications are available for developers.

from openai import OpenAI

url = 'https://api.siliconflow.com/v1/'
api_key = 'your api_key'

client = OpenAI(
    base_url=url,
    api_key=api_key
)

# Round 1: send a request with streaming output
content = ""
messages = [
    {"role": "user", "content": "Prove the Pythagorean theorem and provide a simple example."}
]
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=messages,
    stream=True,  # Enable streaming output
    max_tokens=4096
)
# Receive the streamed chunks and accumulate the answer
for chunk in response:
    delta = chunk.choices[0].delta
    if delta.content:
        content += delta.content

# Round 2: append the first answer and continue the conversation
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "Continue"})
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=messages,
    stream=True
)
# Print the second answer as it streams in
for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

DeepSeek-V3-0324 is optimized for demanding reasoning and real-world applications, ready to help you build smarter tools, better agents, and long-form assistants.

Try DeepSeek-V3 on SiliconFlow now and see what you can create.

Ready to accelerate your AI development?
