DeepSeek
Text Generation
DeepSeek-V4-Pro
DeepSeek-V4-Pro is DeepSeek's flagship open-source MoE model with 1.6T total parameters and 49B activated, purpose-built for frontier-level reasoning, coding, and agentic tasks. Supporting a 1M-token context window and three reasoning effort modes up to Think Max, it achieves top-tier performance on coding benchmarks such as LiveCodeBench and Codeforces — rivaling leading closed-source models — and is released under the MIT License....
Total context: 1049K
Max output: 393K
Input: $1.74 / M Tokens
Output: $3.48 / M Tokens
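The per-million-token prices above translate directly into request costs. A minimal sketch, using the DeepSeek-V4-Pro listing prices ($1.74 input, $3.48 output per M tokens); the helper name and token counts are illustrative, not part of any API:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Return the USD cost of one request, given $/M-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# e.g. a 200K-token prompt with a 20K-token completion on DeepSeek-V4-Pro:
cost = estimate_cost(200_000, 20_000, 1.74, 3.48)
print(f"${cost:.4f}")  # → $0.4176
```

The same arithmetic applies to the other models listed; only the two per-M-token rates change.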

Z.ai
Text Generation
GLM-5.1
GLM-5.1 is Z.ai's next-generation flagship model built for agentic engineering. It is designed to run continuously for hours or even longer, refining its strategy as it works—the longer it runs, the better the results....
Total context: 205K
Max output: 131K
Input: $1.4 / M Tokens
Output: $4.4 / M Tokens
DeepSeek
Text Generation
DeepSeek-V3.2
DeepSeek-V3.2 is a model that combines high computational efficiency with strong reasoning and agentic performance. Its approach rests on three key technical breakthroughs: DeepSeek Sparse Attention (DSA), an efficient attention mechanism that significantly reduces computational complexity while preserving model performance, optimized specifically for long-context scenarios; a scalable reinforcement learning framework that puts its performance on par with GPT-5, with reasoning ability rivaling the high-compute version of Gemini-3.0-Pro; and a large-scale agentic task synthesis pipeline that integrates reasoning into tool-use scenarios, improving instruction compliance and generalization in complex interactive environments. The model achieved gold-medal results at the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI)....
Total context: 164K
Max output: 164K
Input: $0.27 / M Tokens
Output: $0.42 / M Tokens

