DeepSeek-V4-Pro

deepseek-ai/DeepSeek-V4-Pro

About DeepSeek-V4-Pro

DeepSeek-V4-Pro is DeepSeek's flagship open-source MoE model with 862B total parameters and 49B activated, purpose-built for frontier-level reasoning, coding, and agentic tasks. Supporting a 1M-token context window and three reasoning effort modes up to Think Max, it achieves top-tier performance on coding benchmarks such as LiveCodeBench and Codeforces, rivaling leading closed-source models, and is released under the MIT License.

Available Serverless

Run queries immediately, pay only for usage

Input Price: $1.74 / M tokens

Cache Read: $0.145 / M tokens

Output Price: $3.48 / M tokens
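As a quick sanity check on the rates above, a short script can estimate the cost of a single request; the token counts in the example are made-up illustrative values, not figures from this page:

```python
# Per-million-token rates listed above (USD).
INPUT_PER_M = 1.74
CACHE_READ_PER_M = 0.145
OUTPUT_PER_M = 3.48

def estimate_cost(input_tokens, output_tokens, cached_tokens=0):
    """Estimate request cost in USD; cached input tokens bill at the cache-read rate."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: 100K-token prompt, 80K of it served from cache, 4K-token reply.
print(round(estimate_cost(100_000, 4_000, cached_tokens=80_000), 4))  # → 0.0603
```

Cache reads are roughly 12x cheaper than fresh input here, so prompt-caching long shared prefixes dominates the savings for repeated large-context calls.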

Metadata

Created on:

License: MIT

Provider: DeepSeek

HuggingFace: deepseek-ai/DeepSeek-V4-Pro

Specification

State: Available

Architecture: Hybrid Attention MoE

Calibrated: Yes

Mixture of Experts: Yes

Total Parameters: 862B

Activated Parameters: 49B

Reasoning: Yes

Precision: FP8

Context Length: 1049K

Max Tokens: 393K
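The context and output limits above imply a simple budget check before sending a request. The exact values are assumed here to be the powers-of-two that round to the listed figures (1,048,576 for "1049K" and 393,216 for "393K"); that reading is an assumption about how this page rounds, not something it states:

```python
# Limits from the specification above; exact values are an assumption
# (powers-of-two that round to the listed 1049K and 393K).
CONTEXT_LENGTH = 1_048_576    # "1049K" context window
MAX_OUTPUT_TOKENS = 393_216   # "393K" max output tokens

def fits_in_context(prompt_tokens, max_new_tokens):
    """True if the prompt plus the requested output stays within both limits."""
    if max_new_tokens > MAX_OUTPUT_TOKENS:
        return False
    return prompt_tokens + max_new_tokens <= CONTEXT_LENGTH

print(fits_in_context(1_000_000, 48_576))  # True: exactly at the window limit
print(fits_in_context(1_000_000, 50_000))  # False: exceeds the context window
```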

Supported Functionality

Serverless: Supported

Serverless LoRA: Not supported

Fine-tuning: Not supported

Embeddings: Not supported

Rerankers: Not supported

Image Input: Not supported

JSON Mode: Supported

Structured Outputs: Supported? No — Not supported

Tools: Supported

FIM Completion: Not supported

Chat Prefix Completion: Supported
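Two of the supported features above, JSON Mode and Tools, are typically exercised through an OpenAI-compatible chat completions request. The sketch below only constructs such a request body; the model id "deepseek-v4-pro" and the tool schema are illustrative assumptions, not values confirmed by this page:

```python
import json

def build_request(user_prompt):
    """Build a chat completions payload using JSON Mode and one tool.

    The model id and the get_weather tool are hypothetical examples.
    """
    return {
        "model": "deepseek-v4-pro",  # assumed id, check the provider's catalog
        "messages": [{"role": "user", "content": user_prompt}],
        # JSON Mode (listed as Supported): constrain output to a JSON object.
        "response_format": {"type": "json_object"},
        # Tools (listed as Supported): one illustrative function schema.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_request("What's the weather in Paris? Answer as JSON.")
print(json.dumps(payload, indent=2))
```

Note that Structured Outputs (a fixed response JSON Schema) is listed as not supported, so JSON Mode plus prompt instructions is the available way to shape the response here.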

Ready to accelerate your AI development?
