Ling-flash-2.0

inclusionAI/Ling-flash-2.0

About Ling-flash-2.0

Ling-flash-2.0 is a language model from inclusionAI with 100 billion total parameters, of which 6.1 billion are activated per token (4.8 billion non-embedding). As part of the Ling 2.0 architecture series, it is designed as a lightweight yet powerful Mixture-of-Experts (MoE) model, aiming to match or exceed the performance of 40B-class dense models and larger MoE models while activating only a small fraction of its parameters. The model reflects a strategy of reaching high performance and efficiency through aggressive architectural design and training methods.

Available Serverless

Run queries immediately, pay only for usage

$0.14 input / $0.57 output per 1M tokens
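
As a minimal sketch of serverless usage, the example below sends a chat request through an OpenAI-compatible Python client and estimates the per-call cost from the returned token counts. The base URL, API key, and exact model identifier passed to the client are assumptions for illustration, not values confirmed by this page.

```python
# Minimal sketch of a serverless query; the endpoint URL, API key,
# and model id below are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="inclusionAI/Ling-flash-2.0",
    messages=[{"role": "user", "content": "Explain MoE routing in one sentence."}],
)
print(resp.choices[0].message.content)

# Pay-only-for-usage estimate at $0.14 input / $0.57 output per 1M tokens.
u = resp.usage
cost = (u.prompt_tokens * 0.14 + u.completion_tokens * 0.57) / 1_000_000
print(f"Estimated cost: ${cost:.6f}")
```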

Metadata

Created on: Sep 18, 2025
License: MIT
Provider: inclusionAI
HuggingFace: inclusionAI/Ling-flash-2.0

Specification

State: Available

Architecture
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 100B
Activated Parameters: 6.1B
Reasoning: No
Precision: FP8
Context Length: 131K
Max Tokens: 131K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Supported
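
The sketch below exercises two of the supported features, JSON Mode and Tools, through an OpenAI-compatible client. The endpoint, API key, and tool definition are hypothetical placeholders; Chat Prefix Completion is also listed as supported but its request format is provider-specific, so it is omitted here.

```python
# Hedged sketch of the supported JSON Mode and Tools features via an
# OpenAI-compatible client; endpoint, key, and model id are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_API_KEY")
MODEL = "inclusionAI/Ling-flash-2.0"

# JSON Mode: constrain the reply to valid JSON. (Structured Outputs,
# i.e. schema-enforced responses, is listed as not supported.)
json_resp = client.chat.completions.create(
    model=MODEL,
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": "Return a JSON object with keys 'name' and 'params'."}],
)
print(json_resp.choices[0].message.content)

# Tools: declare a function the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
tool_resp = client.chat.completions.create(
    model=MODEL,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)
print(tool_resp.choices[0].message.tool_calls)
```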

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.
