Ling-flash-2.0
About Ling-flash-2.0
Ling-flash-2.0 is a language model from inclusionAI with 100 billion total parameters, of which 6.1 billion are activated per token (4.8 billion non-embedding). As part of the Ling 2.0 architecture series, it is designed as a lightweight yet powerful Mixture-of-Experts (MoE) model. It aims to deliver performance comparable to, or even exceeding, that of 40B-level dense models and larger MoE models, with a significantly smaller active parameter count. The model reflects a strategy of achieving high performance and efficiency through aggressive architectural design and training optimizations.
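To make the total-versus-active parameter distinction concrete, here is a minimal, generic top-k MoE routing sketch in Python. It is illustrative only: the expert count, dimensions, and routing details are placeholders, not Ling-flash-2.0's actual design.

```python
import numpy as np

# Generic top-k MoE routing sketch (illustrative; NOT Ling's real architecture).
# Only k experts run per token, which is why active parameters (6.1B)
# can be far below total parameters (100B).
rng = np.random.default_rng(0)
num_experts, k, d = 8, 2, 16

x = rng.normal(size=d)                        # one token's hidden state
router = rng.normal(size=(num_experts, d))    # router weight matrix
experts = rng.normal(size=(num_experts, d, d))

logits = router @ x
top = np.argsort(logits)[-k:]                 # pick the k highest-scoring experts
gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over selected

# Token output is the gate-weighted sum of just the selected experts.
y = sum(g * (experts[i] @ x) for g, i in zip(gates, top))
print("selected experts:", top, "output shape:", y.shape)
```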
Available Serverless
Run queries immediately, pay only for usage
Pricing: $0.14 per 1M input tokens / $0.57 per 1M output tokens
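For a quick sense of serverless cost at these rates, here is a small Python sketch. The rates are copied from this page; the helper function itself is illustrative.

```python
INPUT_PER_M = 0.14   # USD per 1M input tokens (from this page)
OUTPUT_PER_M = 0.57  # USD per 1M output tokens (from this page)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a 2,000-token prompt with an 800-token reply:
print(f"${estimate_cost(2_000, 800):.6f}")  # -> $0.000736
```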
Metadata
Specification
State: Available
Architecture
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 100B
Activated Parameters: 6.1B
Reasoning: No
Precision: FP8
Context Length: 131K
Max Tokens: 131K
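A minimal serverless chat call, assuming an OpenAI-compatible endpoint as SiliconFlow provides. The base URL and model ID below are assumptions for illustration; verify both in your provider dashboard.

```python
from openai import OpenAI

# base_url and model ID are assumptions; check your dashboard for exact values.
client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="inclusionAI/Ling-flash-2.0",
    messages=[{"role": "user", "content": "Summarize MoE routing in two sentences."}],
    max_tokens=512,  # prompt plus output must fit within the 131K limits above
)
print(resp.choices[0].message.content)
```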
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Supported (see the first sketch after this list)
Structured Outputs: Not supported
Tools: Supported (see the second sketch after this list)
FIM Completion: Not supported
Chat Prefix Completion: Supported
SiliconFlow Service
Comprehensive solutions to deploy and scale your AI applications with maximum flexibility:
60% lower latency
2x higher throughput
65% cost savings
Compare with Other Models
See how this model stacks up against others.

inclusionAI | chat
Ring-flash-2.0
Released: Sep 29, 2025
Total context: 131K, Max output: 131K
Input: $0.14 / 1M tokens, Output: $0.57 / 1M tokens

inclusionAI | chat
Ling-flash-2.0
Released: Sep 18, 2025
Total context: 131K, Max output: 131K
Input: $0.14 / 1M tokens, Output: $0.57 / 1M tokens

inclusionAI | chat
Ling-mini-2.0
Released: Sep 10, 2025
Total context: 131K, Max output: 131K
Input: $0.07 / 1M tokens, Output: $0.28 / 1M tokens
Model FAQs: Usage, Deployment
Learn how to use and deploy this model with ease.
