Ring-1T
About Ring-1T
Ring-1T is an open-source, trillion-parameter thinking model released by the Bailing team. Built on the Ling 2.0 architecture and the Ling-1T-base foundation model, it has 1 trillion total parameters with 50 billion activated parameters and supports a context window of up to 131K tokens. The model's deep reasoning and natural-language inference capabilities were substantially strengthened through large-scale reinforcement learning with verifiable rewards (RLVR), combined with the team's self-developed icepop RL stabilization method and the efficient ASystem RL framework. Ring-1T achieves leading open-source performance on challenging reasoning benchmarks, including math competitions (e.g., IMO 2025), code generation (e.g., ICPC World Finals 2025), and logical reasoning.
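As a sketch of how a model like this is typically queried, the snippet below builds a chat-completion payload in the common OpenAI-compatible format. The endpoint URL, model identifier, and parameter defaults here are assumptions for illustration, not confirmed by this page; check the provider's API documentation before use.

```python
import json

# Hypothetical values -- the actual endpoint, model ID, and auth scheme
# are assumptions; consult the provider's API docs.
API_URL = "https://example-provider.com/v1/chat/completions"
MODEL_ID = "inclusionAI/Ring-1T"

def build_request(question: str, max_tokens: int = 4096) -> dict:
    """Build an OpenAI-compatible chat-completion payload for a
    long-context reasoning query (Ring-1T supports up to 131K tokens)."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a careful step-by-step reasoner."},
            {"role": "user", "content": question},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.6,
    }

payload = build_request("Prove that there are infinitely many primes.")
print(json.dumps(payload, indent=2))
```

The payload is only constructed here, not sent; wiring it to an HTTP client and adding an API key is left to the caller.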
Discover how Ring-1T's trillion-parameter reasoning tackles intricate challenges across diverse domains.
Advanced Math & Proofs
Excel at complex mathematical challenges, generating and verifying proofs for theoretical physics, engineering, or competitive math.
Use Case Example:
"Solved a challenging number theory problem from IMO 2025, providing a rigorous, step-by-step proof that earned a silver medal equivalent."
Elite Code & Debugging
Master algorithmic coding, identify subtle logical errors, and optimize performance across various programming languages and system architectures.
Use Case Example:
"Debugged a critical concurrency bug in a high-performance Rust web server, pinpointing the exact race condition and suggesting an atomic operation fix."
Strategic Causal Analysis
Perform multi-step quantitative and qualitative analysis on vast datasets, inferring causal relationships for strategic recommendations in business or policy.
Use Case Example:
"Analyzed global supply chain data and geopolitical events to predict future disruptions, advising a manufacturing firm on proactive risk mitigation strategies."
Formal System Verification
Audit complex systems, from legal frameworks to engineering schematics, by reasoning through logical dependencies, identifying inconsistencies, and ensuring compliance.
Use Case Example:
"Formally verified the security properties of a new blockchain consensus mechanism written in Solidity, uncovering a reentrancy vulnerability before deployment."
Long-Context Knowledge Synthesis
Synthesize vast amounts of information from extensive documents (up to 131K tokens), generating comprehensive reports, literature reviews, or legal summaries.
Use Case Example:
"Consolidated thousands of medical research papers on a rare disease, producing a concise, evidence-based review for a pharmaceutical R&D team in hours."
Metadata

| Specification | Value |
| --- | --- |
| State | Deprecated |
| **Architecture** | |
| Calibrated | Yes |
| Mixture of Experts | Yes |
| Total Parameters | 1000B |
| Activated Parameters | 50B |
| Reasoning | No |
| Precision | FP8 |
| Context Length | 131K |
| Max Tokens | |
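The parameter figures above imply a sparse MoE activation ratio. A quick back-of-the-envelope check (assuming FP8 stores roughly one byte per weight, an approximation that ignores per-block scaling factors and embeddings):

```python
# Figures from the metadata above: 1000B total params, 50B activated, FP8.
TOTAL_PARAMS = 1_000e9
ACTIVE_PARAMS = 50e9
BYTES_PER_PARAM_FP8 = 1  # approximation: ignores scaling-factor overhead

activation_ratio = ACTIVE_PARAMS / TOTAL_PARAMS
total_weight_gb = TOTAL_PARAMS * BYTES_PER_PARAM_FP8 / 1e9

print(f"Activated fraction per forward pass: {activation_ratio:.1%}")  # 5.0%
print(f"Approx. FP8 weight storage: {total_weight_gb:.0f} GB")         # 1000 GB
```

So only about 5% of the weights participate in any given forward pass, which is what keeps inference cost closer to a 50B dense model despite the 1T total size.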
Compare with Other Models
See how this model stacks up against others.

All models listed are inclusionAI chat models.

| Model | Release Date | Total Context | Max Output | Input Price | Output Price |
| --- | --- | --- | --- | --- | --- |
| Ling-flash-2.0 | Sep 18, 2025 | 131K | 131K | $0.14 / M tokens | $0.57 / M tokens |
| Ling-mini-2.0 | Sep 10, 2025 | 131K | 131K | $0.07 / M tokens | $0.28 / M tokens |
| Ring-flash-2.0 | Sep 29, 2025 | 131K | 131K | $0.14 / M tokens | $0.57 / M tokens |
| Ling-1T | Oct 11, 2025 | 131K | | $0.57 / M tokens | $2.28 / M tokens |
| Ring-1T | Oct 14, 2025 | 131K | | $0.57 / M tokens | $2.28 / M tokens |