Kimi-K2-Thinking
About Kimi-K2-Thinking
Kimi K2 Thinking is the latest and most capable open-source thinking model from Moonshot AI. Built on Kimi K2, it is a thinking agent that reasons step-by-step while dynamically invoking tools. It sets a new state of the art on Humanity's Last Exam (HLE), BrowseComp, and other benchmarks by dramatically scaling multi-step reasoning depth and maintaining stable tool use across 200–300 sequential calls. K2 Thinking is also a natively INT4-quantized model with a 262K context window, cutting inference latency and GPU memory usage without loss of quality.
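The interleaving of step-by-step reasoning and tool calls described above can be sketched as a simple agent loop. This is a minimal illustration only: the stub client, the `get_weather` tool, and the message shapes are hypothetical stand-ins for a real OpenAI-style chat client, not the actual Kimi API.

```python
import json

# Hypothetical local tool the model may call (illustrative, not part of the API).
def get_weather(city):
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def run_agent(client, messages, max_steps=300):
    """Interleave model turns with tool execution, in the spirit of
    K2 Thinking's 200-300 sequential tool calls."""
    for _ in range(max_steps):
        reply = client.chat(messages)        # one reasoning step
        if not reply.get("tool_calls"):      # no tool requested: final answer
            return reply["content"]
        messages.append(reply)
        for call in reply["tool_calls"]:     # execute each requested tool
            result = TOOLS[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
    raise RuntimeError("step budget exhausted")

# Stub standing in for a real Kimi-K2-Thinking endpoint.
class StubClient:
    def __init__(self):
        self.turn = 0
    def chat(self, messages):
        self.turn += 1
        if self.turn == 1:   # first turn: ask for a tool call
            return {"role": "assistant", "content": None,
                    "tool_calls": [{"name": "get_weather",
                                    "arguments": '{"city": "Paris"}'}]}
        return {"role": "assistant", "content": "It is sunny in Paris.",
                "tool_calls": None}

answer = run_agent(StubClient(),
                   [{"role": "user", "content": "Weather in Paris?"}])
print(answer)  # It is sunny in Paris.
```

The loop returns as soon as a turn produces no tool calls, so the step budget is an upper bound rather than a fixed trajectory length.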
Available Serverless
Run queries immediately, pay only for usage
$0.55 / $2.5 per 1M tokens (input / output)
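At these rates, the cost of a request is easy to estimate. A small helper, assuming only the listed $0.55 and $2.5 per-million-token prices (the example token counts are made up):

```python
INPUT_PRICE = 0.55   # $ per 1M input tokens (listed rate)
OUTPUT_PRICE = 2.5   # $ per 1M output tokens (listed rate)

def request_cost(input_tokens, output_tokens):
    """Estimated cost in dollars for a single request."""
    return (input_tokens * INPUT_PRICE
            + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a long agentic run: 200K input tokens, 20K generated tokens
print(round(request_cost(200_000, 20_000), 4))  # 0.16
```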
Metadata
Specification
State: Available
Architecture
Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 1000B
Activated Parameters: 32B
Reasoning: No
Precision: FP8
Context length: 262K
Max Tokens: 262K
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported
FIM Completion: Not supported
Chat Prefix Completion: Supported
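Since JSON mode and tools are both supported, a chat request can combine them. The sketch below only builds the request body; the tool definition is a made-up example, and the endpoint URL and any provider-specific model-name prefix are assumptions, not taken from this page.

```python
import json

# Request body for an OpenAI-compatible chat completions call.
# Model name as listed on this page; your provider may require a prefix.
payload = {
    "model": "Kimi-K2-Thinking",
    "messages": [{"role": "user", "content": "Find the current BTC price."}],
    "response_format": {"type": "json_object"},  # JSON Mode: Supported
    "tools": [{
        "type": "function",
        "function": {
            "name": "fetch_price",               # hypothetical tool
            "description": "Fetch the spot price of an asset",
            "parameters": {
                "type": "object",
                "properties": {"symbol": {"type": "string"}},
                "required": ["symbol"],
            },
        },
    }],
}

body = json.dumps(payload)
# requests.post("<your-endpoint>/v1/chat/completions", data=body, ...) would send it.
print(json.loads(body)["model"])  # Kimi-K2-Thinking
```

Structured Outputs (schema-constrained generation) is listed as not supported, so constrain the output shape through JSON mode plus prompt instructions instead.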
SiliconFlow Service
Comprehensive solutions to deploy and scale your AI applications with maximum flexibility
60% lower latency
2x higher throughput
65% cost savings
Compare with Other Models
See how this model stacks up against others.

Moonshot AI
chat
Kimi-K2-Thinking
Released on: Nov 7, 2025
Total Context: 262K
Max output: 262K
Input: $0.55 / M Tokens
Output: $2.5 / M Tokens

Moonshot AI
chat
Kimi-K2-Instruct-0905
Released on: Sep 8, 2025
Total Context: 262K
Max output: 262K
Input: $0.4 / M Tokens
Output: $2.0 / M Tokens

Moonshot AI
chat
Kimi-K2-Instruct
Released on: Jul 13, 2025
Total Context: 131K
Max output: 131K
Input: $0.58 / M Tokens
Output: $2.29 / M Tokens

Moonshot AI
chat
Kimi-Dev-72B
Released on: Jun 19, 2025
Total Context: 131K
Max output: 131K
Input: $0.29 / M Tokens
Output: $1.15 / M Tokens
Model FAQs: Usage, Deployment
Learn how to use and deploy this model with ease.
