DeepSeek
Text Generation
DeepSeek-V4-Pro
DeepSeek-V4-Pro is DeepSeek's flagship open-source MoE model with 1.6T total parameters and 49B activated, purpose-built for frontier-level reasoning, coding, and agentic tasks. Supporting a 1M-token context window and three reasoning effort modes up to Think Max, it achieves top-tier performance on coding benchmarks such as LiveCodeBench and Codeforces — rivaling leading closed-source models — and is released under the MIT License....
Context length: 1049K
Max output length: 393K
Input: $1.74 / M Tokens
Output: $3.48 / M Tokens

Z.ai
Text Generation
GLM-5.1
GLM-5.1 is Z.ai's next-generation flagship model built for agentic engineering. It is designed to run continuously for hours or even longer, refining its strategy as it works—the longer it runs, the better the results....
Context length: 205K
Max output length: 131K
Input: $1.4 / M Tokens
Output: $4.4 / M Tokens
DeepSeek
Text Generation
DeepSeek-V3.2
DeepSeek-V3.2 is a model that combines high computational efficiency with strong reasoning and agentic performance. Its approach rests on three key technical breakthroughs: DeepSeek Sparse Attention (DSA), an efficient attention mechanism that substantially reduces computational complexity while preserving model quality, optimized in particular for long-context scenarios; a scalable reinforcement-learning framework that brings its performance on par with GPT-5, with reasoning ability comparable to the high-compute variant of Gemini-3.0-Pro; and a large-scale agentic task-synthesis pipeline that integrates reasoning into tool-use scenarios, improving instruction compliance and generalization in complex interactive environments. The model achieved gold-medal results at the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI)....
Context length: 164K
Max output length: 164K
Input: $0.27 / M Tokens
Output: $0.42 / M Tokens
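The per-million-token rates in the cards above translate directly into request cost. A minimal sketch, assuming the listed rates; `estimate_cost` is a hypothetical helper, not part of any provider SDK:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Return USD cost given token counts and $/M-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: DeepSeek-V3.2 rates from the listing ($0.27 input, $0.42 output),
# for a hypothetical request with 10k input and 2k output tokens.
cost = estimate_cost(10_000, 2_000, 0.27, 0.42)
print(f"${cost:.6f}")  # -> $0.003540
```

The same helper applies to the other cards by swapping in their listed rates.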

