
Model Comparison: DeepSeek-V3.1-Terminus vs Qwen3-235B-A22B-Thinking-2507
Jan 13, 2026

Pricing

|        | DeepSeek-V3.1-Terminus | Qwen3-235B-A22B-Thinking-2507 |
|--------|------------------------|-------------------------------|
| Input  | $0.27 / M tokens       | $0.13 / M tokens              |
| Output | $1.00 / M tokens       | $0.60 / M tokens              |
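
To make the rates concrete, here is a small worked example computing the cost of a single request at the listed prices. The token counts are illustrative placeholders, not measurements:

```python
# Cost of one request at the listed per-million-token rates.
# Token counts are illustrative placeholders, not measurements.
PRICES = {  # model: (input $/M tokens, output $/M tokens)
    "DeepSeek-V3.1-Terminus": (0.27, 1.00),
    "Qwen3-235B-A22B-Thinking-2507": (0.13, 0.60),
}

input_tokens, output_tokens = 4_000, 1_500  # hypothetical request size

for model, (in_rate, out_rate) in PRICES.items():
    cost = (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
    print(f"{model}: ${cost:.5f} per request")
```

At these counts the request costs about $0.0026 on DeepSeek-V3.1-Terminus and about $0.0014 on Qwen3-235B-A22B-Thinking-2507.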
Metadata

|            | DeepSeek-V3.1-Terminus | Qwen3-235B-A22B-Thinking-2507 |
|------------|------------------------|-------------------------------|
| Created on | Sep 22, 2025           | Jul 25, 2025                  |
| License    | MIT                    | Apache-2.0                    |
| Provider   | DeepSeek               | Qwen                          |
Specification

|       | DeepSeek-V3.1-Terminus | Qwen3-235B-A22B-Thinking-2507 |
|-------|------------------------|-------------------------------|
| State | Available              | Available                     |

Architecture

|                      | DeepSeek-V3.1-Terminus | Qwen3-235B-A22B-Thinking-2507 |
|----------------------|------------------------|-------------------------------|
| Calibrated           | No                     | Yes                           |
| Mixture of Experts   | Yes                    | Yes                           |
| Total Parameters     | 671B                   | 235B                          |
| Activated Parameters | 37B                    | 22B                           |
| Reasoning            | No                     | No                            |
| Precision            | FP8                    | FP8                           |
| Context Length       | 164K                   | 262K                          |
| Max Tokens           | 164K                   | 262K                          |
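
Both models are Mixture-of-Experts designs, so only a fraction of the total parameters is active for any given token. A back-of-the-envelope sketch of what the table implies, assuming roughly 1 byte per parameter at FP8 and ignoring KV cache and runtime overhead:

```python
# Rough memory and activation-ratio math from the specification table.
# Assumes ~1 byte per FP8 weight; ignores KV cache, activations, and overhead.
models = {  # model: (total params in billions, activated params in billions)
    "DeepSeek-V3.1-Terminus": (671, 37),
    "Qwen3-235B-A22B-Thinking-2507": (235, 22),
}

for name, (total_b, active_b) in models.items():
    approx_weight_gb = total_b          # ~1 GB per billion params at FP8
    active_ratio = active_b / total_b
    print(f"{name}: ~{approx_weight_gb} GB of FP8 weights, "
          f"{active_ratio:.1%} of parameters active per token")
```

The low activation ratio is what lets both models serve tokens more cheaply than a dense model of the same total size.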
Supported Functionality

|                        | DeepSeek-V3.1-Terminus | Qwen3-235B-A22B-Thinking-2507 |
|------------------------|------------------------|-------------------------------|
| Serverless             | Supported              | Supported                     |
| Serverless LoRA        | Not supported          | Not supported                 |
| Fine-tuning            | Not supported          | Not supported                 |
| Embeddings             | Not supported          | Not supported                 |
| Rerankers              | Not supported          | Not supported                 |
| Image Input            | Not supported          | Not supported                 |
| JSON Mode              | Supported              | Supported                     |
| Structured Outputs     | Not supported          | Not supported                 |
| Tools                  | Supported              | Supported                     |
| FIM Completion         | Not supported          | Not supported                 |
| Chat Prefix Completion | Supported              | Supported                     |
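
Both models expose JSON Mode, Tools, and Chat Prefix Completion, but not Structured Outputs or FIM Completion. The sketch below shows a JSON Mode request through an OpenAI-compatible client; the base URL, API key, and model identifier are placeholders for whatever your provider actually uses, not values taken from this page:

```python
# Minimal JSON Mode sketch against an OpenAI-compatible endpoint.
# base_url, api_key, and the model name are placeholders; substitute
# your provider's real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                          # placeholder key
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.1-Terminus",  # illustrative model id
    messages=[
        {"role": "system", "content": "Reply with a single JSON object."},
        {"role": "user", "content": 'Give three primes as {"primes": [...]}.'},
    ],
    response_format={"type": "json_object"},  # JSON Mode (Supported above)
)
print(response.choices[0].message.content)
```

Tool calling works through the same endpoint by passing a `tools` list of function schemas; Structured Outputs (schema-enforced `response_format`) is listed as not supported for either model.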
DeepSeek-V3.1-Terminus in Comparison
See how DeepSeek-V3.1-Terminus compares with other popular models across key dimensions:

- GLM-4.7
- DeepSeek-V3.2
- Kimi-K2-Thinking
- MiniMax-M2
- DeepSeek-V3.2-Exp
- GLM-4.6
- Qwen3-VL-235B-A22B-Instruct
- Qwen3-VL-235B-A22B-Thinking
- Kimi-K2-Instruct-0905
- step3
- Qwen3-235B-A22B-Thinking-2507
- Qwen3-Coder-480B-A35B-Instruct
- Qwen3-235B-A22B-Instruct-2507
- Kimi-K2-Instruct
- Kimi-Dev-72B
- MiniMax-M1-80k
- ERNIE-4.5-300B-A47B
