

Model Comparison: Ring-flash-2.0 vs. Step-3.5-Flash
Feb 15, 2026

Pricing
          Ring-flash-2.0      Step-3.5-Flash
Input     $0.14 / M tokens    $0.10 / M tokens
Output    $0.57 / M tokens    $0.30 / M tokens
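To make the per-million-token prices above concrete, a small helper can estimate what a single request would cost on each model. The 10K-input / 2K-output workload is purely illustrative:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost in dollars, given prices per million tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Hypothetical workload: 10K input tokens, 2K output tokens.
ring = request_cost(10_000, 2_000, 0.14, 0.57)  # Ring-flash-2.0 prices
step = request_cost(10_000, 2_000, 0.10, 0.30)  # Step-3.5-Flash prices
print(f"Ring-flash-2.0: ${ring:.6f}")  # $0.002540
print(f"Step-3.5-Flash: ${step:.6f}")  # $0.001600
```

At these list prices, Step-3.5-Flash is cheaper on both input and output, so it wins for any token mix; the gap widens on output-heavy workloads.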
Metadata
              Ring-flash-2.0    Step-3.5-Flash
Created on    Sep 19, 2025      Feb 1, 2026
License       MIT License       Apache 2.0
Provider      inclusionAI       StepFun
Specification
                       Ring-flash-2.0                      Step-3.5-Flash
State                  Available                           Available
Architecture           Mixture-of-Experts (MoE) with a     Sparse Mixture-of-Experts
                       1/32 expert activation ratio and    (MoE) transformer
                       MTP layers; a low-activation,       architecture
                       high-sparsity design
Calibrated             Yes                                 No
Mixture of Experts     Yes                                 Yes
Total Parameters       100B                                196B
Activated Parameters   6.1B                                11B
Reasoning              No                                  No
Precision              FP8                                 FP8
Context Length         131K                                262K
Max Tokens             131K                                66K
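The activated-parameter figures follow from the MoE design: only a fraction of expert weights run per token, while dense components (attention, any shared experts) are always active. A rough decomposition for Ring-flash-2.0, where the dense share is an illustrative assumption since the table does not break it out:

```python
def activated_params(total: float, expert_fraction: float, dense_share: float) -> float:
    """Rough MoE estimate: always-on dense params plus the routed-expert slice.
    dense_share is the assumed fraction of total params that is not expert weights."""
    expert_params = total * (1 - dense_share)
    return total * dense_share + expert_params * expert_fraction

# Ring-flash-2.0: 100B total, 1/32 expert activation (table values).
# A ~3% dense share is a hypothetical figure, not published by the provider.
est = activated_params(100e9, 1 / 32, 0.03)
print(f"{est / 1e9:.1f}B activated")  # roughly 6B, in line with the listed 6.1B
```

The point is that 100B / 32 alone gives only ~3.1B; the listed 6.1B implies a few billion always-active parameters on top of the routed experts.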
Supported Functionality
                         Ring-flash-2.0    Step-3.5-Flash
Serverless               Supported         Supported
Serverless LoRA          Not supported     Not supported
Fine-tuning              Not supported     Not supported
Embeddings               Not supported     Not supported
Rerankers                Not supported     Not supported
Image input              Not supported     Not supported
JSON Mode                Not supported     Not supported
Structured Outputs       Not supported     Not supported
Tools                    Not supported     Supported
FIM Completion           Not supported     Not supported
Chat Prefix Completion   Supported         Supported
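Of the two, only Step-3.5-Flash supports tool calling. A minimal request body in the widely used OpenAI-compatible chat-completions schema; whether this endpoint uses exactly this schema, and the model identifier shown, are assumptions, not confirmed by the table:

```python
# Illustrative tool-calling payload (OpenAI-compatible schema is assumed).
payload = {
    "model": "step-3.5-flash",  # hypothetical model identifier
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
print(payload["tools"][0]["function"]["name"])  # get_weather
```

If the model decides to call the tool, the response carries a `tool_calls` entry with the function name and JSON arguments, which the client executes and feeds back as a `tool` message.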
