DeepSeek-V4-Flash

deepseek-ai/DeepSeek-V4-Flash

About DeepSeek-V4-Flash

DeepSeek-V4-Flash is DeepSeek's latest open-source MoE model featuring 284B total parameters with only 13B activated during inference, delivering high-speed generation without sacrificing capability. With native support for a 1M-token context window and three switchable reasoning modes — Non-Think, Think High, and Think Max — it offers flexible intelligence scaling from everyday tasks to complex reasoning, all under the MIT License.
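The three reasoning modes can be selected per request. A minimal sketch of what such a request payload might look like on an OpenAI-compatible chat endpoint — the `reasoning_mode` field name and the lowercase mode slugs are assumptions for illustration, not the provider's documented parameters:

```python
import json

# Hypothetical mode slugs for Non-Think / Think High / Think Max.
# Field name and values are assumptions; check the actual API reference.
MODES = {"non-think", "think-high", "think-max"}

def build_request(prompt: str, mode: str = "non-think") -> dict:
    """Assemble a chat-completion payload with a reasoning-mode switch."""
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    return {
        "model": "deepseek-ai/DeepSeek-V4-Flash",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_mode": mode,  # assumed parameter name
    }

payload = build_request("Summarize MoE routing in two sentences.", "think-high")
print(json.dumps(payload, indent=2))
```

Switching modes per request lets one deployment serve both low-latency everyday traffic and heavier reasoning workloads.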

Available Serverless

Run queries immediately, pay only for usage

Input Price: $0.14 / M Tokens

Cache Read: $0.028 / M Tokens

Output Price: $0.28 / M Tokens
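The per-million-token rates above can be combined into a simple cost estimate. A sketch, assuming cache-read pricing applies to the cached portion of the input:

```python
# Listed serverless rates, in dollars per 1M tokens.
INPUT_PER_M = 0.14    # uncached input tokens
CACHE_PER_M = 0.028   # cache-read input tokens
OUTPUT_PER_M = 0.28   # output tokens

def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in dollars; cached tokens bill at the cache-read rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_PER_M
            + cached_tokens * CACHE_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: 100K-token prompt with 80K cache hits, 10K-token response.
print(f"${estimate_cost(100_000, 80_000, 10_000):.6f}")
```

Cache reads cost a fifth of fresh input, so prompts with large shared prefixes (system prompts, long documents) are substantially cheaper on repeat calls.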

Metadata

Created on:

License: MIT

Provider: DeepSeek

Specification

State: Available

Architecture: MoE

Calibrated: Yes

Mixture of Experts: Yes

Total Parameters: 158B

Activated Parameters: 13B

Reasoning: No

Precision: FP8

Context Length: 1049K

Max Tokens: 393K

Supported Functionality

Serverless: Supported

Serverless LoRA: Not supported

Fine-tuning: Not supported

Embeddings: Not supported

Rerankers: Not supported

Image input: Not supported

JSON Mode: Supported

Structured Outputs: Not supported

Tools: Supported

FIM Completion: Not supported

Chat Prefix Completion: Supported
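Since JSON Mode and Chat Prefix Completion are both listed as supported, a request can constrain the output format and seed the assistant's reply. A hedged sketch — the `response_format` shape follows the common OpenAI-compatible convention, and the `prefix` flag on the final assistant message follows DeepSeek's documented beta convention; verify both against the current API reference:

```python
import json

# Hedged example payload: JSON Mode plus a chat prefix.
# The final assistant message is a prefix the model must continue,
# and response_format forces a JSON object reply.
payload = {
    "model": "deepseek-ai/DeepSeek-V4-Flash",
    "response_format": {"type": "json_object"},  # JSON Mode
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user",
         "content": "List three MoE advantages as {\"advantages\": [...]}."},
        # Prefix the reply so generation continues from an open JSON object.
        {"role": "assistant", "content": "{\"advantages\": [", "prefix": True},
    ],
}
print(json.dumps(payload, indent=2))
```

Combining the two is a common trick: the prefix anchors the opening of the structure, and JSON Mode keeps the completion parseable.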

Ready to accelerate your AI development?