
Moonshot AI
Text Generation
Kimi-K2.6
Kimi K2.6 is an open-source, native multimodal agentic model by Moonshot AI, achieving open-source state-of-the-art on benchmarks including HLE with tools, SWE-Bench Pro, and BrowseComp. Built on a MoE architecture with 1T total parameters and 32B activated, the model supports a 256K-token context window and multimodal inputs (image and video) via its MoonViT vision encoder. K2.6 is optimized for agentic workloads: it sustains 4,000+ tool calls over 12+ hours of continuous execution, scales to 300 parallel sub-agents × 4,000 steps per run to produce 100+ files from a single prompt, and supports both Thinking and Instant inference modes with function calling and multi-turn Preserve Thinking...
Total Context: 262K
Max output: 262K
Input: $0.95 / M Tokens
Cached Input: — / M Tokens
Output: $4.00 / M Tokens
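At the listed rates, per-request cost is simple arithmetic: tokens ÷ 1,000,000 × price per million. A minimal sketch using the Kimi-K2.6 prices above (the helper name and token counts are illustrative):

```python
# Estimate one request's cost from per-million-token prices.
# Defaults are the listed Kimi-K2.6 rates: $0.95/M input, $4.00/M output.

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = 0.95, output_price: float = 4.0) -> float:
    """Return the USD cost of a single request at per-million-token rates."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# e.g. a 200K-token agentic context with a 4K-token reply:
cost = estimate_cost(200_000, 4_000)
print(f"${cost:.4f}")  # 190,000/1M + 16,000/1M dollars = $0.2060
```

Note that long agentic runs (thousands of tool calls) re-send context on every turn, which is where the cached-input rate, when published, matters most.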

Moonshot AI
Text Generation
Kimi-K2.5
Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. With a 1T-parameter MoE architecture (32B active) and 256K context length, it seamlessly integrates vision and language understanding with advanced agentic capabilities, supporting both instant and thinking modes, as well as conversational and agentic paradigms...
Total Context: 262K
Max output: 262K
Input: $0.23 / M Tokens
Cached Input: — / M Tokens
Output: $3.00 / M Tokens
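Models like these are typically reached through an OpenAI-style chat-completions interface. A sketch of assembling such a request body — the model ID and the exact field names your provider accepts are assumptions; no network call is made here:

```python
# Build an OpenAI-style chat-completions payload for a Kimi-family model.
# The model ID "kimi-k2.5" is an assumption; check the provider's model list
# for the exact identifier before sending this to a real endpoint.
import json

def build_chat_request(prompt: str, model: str = "kimi-k2.5",
                       max_tokens: int = 1024) -> dict:
    """Assemble the JSON body of a chat-completions request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize this repository's README.")
print(json.dumps(payload, indent=2))
```

The same body shape works for both instant and thinking modes; mode selection is usually a separate request parameter or model variant, so consult the provider's API reference rather than assuming a field name.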

Moonshot AI
Text Generation
Kimi-K2-Instruct-0905
Kimi K2-Instruct-0905 is the latest and most capable version of Kimi K2, a state-of-the-art mixture-of-experts (MoE) language model. Key features include enhanced coding capabilities (especially front-end development and tool-calling), a context length extended to 256K tokens, and improved integration with various agent scaffolds...
Total Context: 262K
Max output: 262K
Input: $0.40 / M Tokens
Cached Input: — / M Tokens
Output: $2.00 / M Tokens
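Since K2-Instruct-0905's headline feature is tool-calling, here is a sketch of the OpenAI-style tool definition format that agent scaffolds commonly pass to such models. The `get_weather` function and its schema are purely illustrative, not part of any Moonshot API:

```python
# Sketch of an OpenAI-style function-calling tool definition -- the format
# agent scaffolds commonly hand to tool-calling models such as Kimi K2.
# The get_weather example below is hypothetical.

def make_tool(name: str, description: str, parameters: dict) -> dict:
    """Wrap a JSON-Schema parameter spec in a `tools` list entry."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }

tools = [make_tool(
    "get_weather",
    "Look up the current weather for a city.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)]
```

The model responds with a structured tool-call naming one of these functions and its arguments; the scaffold executes the call and feeds the result back as the next message.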

Moonshot AI
Text Generation
Kimi-K2-Instruct
Kimi K2 is a Mixture-of-Experts (MoE) foundation model with exceptional coding and agent capabilities, featuring 1 trillion total parameters and 32 billion activated parameters. In benchmark evaluations covering general knowledge reasoning, programming, mathematics, and agent-related tasks, the K2 model outperforms other leading open-source models...
Total Context: 131K
Max output: 131K
Input: $0.58 / M Tokens
Cached Input: — / M Tokens
Output: $2.29 / M Tokens

