deepseek-vl2

deepseek-ai/deepseek-vl2

About deepseek-vl2

DeepSeek-VL2 is a Mixture-of-Experts (MoE) vision-language model built on DeepSeekMoE-27B. Its sparsely activated MoE architecture delivers strong performance with only 4.5B active parameters. The model excels at tasks such as visual question answering, optical character recognition, document/table/chart understanding, and visual grounding, and achieves competitive or state-of-the-art results against existing open-source dense and MoE-based models while using the same or fewer active parameters.
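
Because the model accepts image input (see Supported Functionality below), a typical call pairs a text prompt with an image URL. Below is a minimal sketch assuming the model is served behind an OpenAI-compatible Chat Completions API; the base URL, API-key variable name, and image URL are placeholders, not documented values for any specific provider.

```python
# Visual question answering sketch, assuming an OpenAI-compatible endpoint.
# base_url and the API_KEY variable name are placeholders (assumptions).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder endpoint
    api_key=os.environ["API_KEY"],          # placeholder variable name
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-vl2",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this chart show?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```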

Available Serverless

Run queries immediately and pay only for usage.

$0.15 / $0.15 per 1M tokens (input/output)
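
At this rate, cost scales linearly with total tokens. A short sketch of the arithmetic; the helper function is hypothetical, for illustration only:

```python
# Billing arithmetic at the listed serverless rate: $0.15 per 1M tokens,
# with the same price for input and output tokens.
PRICE_PER_M_TOKENS = 0.15  # USD per 1,000,000 tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at this model's listed rate."""
    return (input_tokens + output_tokens) * PRICE_PER_M_TOKENS / 1_000_000

# A request filling the 4K context as 3,000 input + 1,000 output tokens:
# (3000 + 1000) * 0.15 / 1e6 = $0.0006
print(f"${request_cost(3000, 1000):.4f}")
```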

Metadata

Created on: Dec 13, 2024
License: other
Provider: DeepSeek
HuggingFace: deepseek-ai/deepseek-vl2

Specification

State: Available

Architecture

Calibrated: No
Mixture of Experts: Yes

Total Parameters: 27B

Activated Parameters: 4.5B
Reasoning: No
Precision: FP8
Context Length: 4K
Max Tokens: 4K
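
With a 4K context window and a 4K output cap, a long prompt limits how much the model can generate in the same request (assuming, as on most serving stacks, that prompt and completion share the window). A small illustrative sketch; the helper and constant names are hypothetical:

```python
# Cap the requested completion length so prompt + completion fits the
# 4K-token context window (assumes input and output share the window).
CONTEXT_LEN = 4096

def usable_max_tokens(prompt_tokens: int, requested: int = 4096) -> int:
    """Largest max_tokens value that still fits alongside the prompt."""
    return max(0, min(requested, CONTEXT_LEN - prompt_tokens))

print(usable_max_tokens(3000))  # -> 1096
```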

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Not supported
FIM Completion: Not supported
Chat Prefix Completion: Supported
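
Since JSON Mode is supported, structured replies can be requested via the response_format parameter of an OpenAI-compatible client. A minimal sketch, with the same placeholder endpoint and credentials as the example above:

```python
# JSON-mode request sketch; base_url and the API_KEY variable name are
# placeholders (assumptions), not a specific provider's documented values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder endpoint
    api_key=os.environ["API_KEY"],          # placeholder variable name
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-vl2",
    response_format={"type": "json_object"},  # JSON Mode, listed as supported
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "List three OCR use cases as JSON."},
    ],
    max_tokens=512,  # stays within the 4K output cap listed above
)
print(response.choices[0].message.content)
```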

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.

Ready to accelerate your AI development?