
Model Comparison vs step3
Feb 28, 2026

Pricing

| Input | Output |
|-------|--------|
| 0.57  | 1.42   |
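Assuming the listed prices are USD per 1M tokens (a common convention for pricing pages like this one, though the unit is not stated here), a per-request cost can be sketched as:

```python
# Rough per-request cost estimate, ASSUMING the prices above are
# USD per 1M tokens (assumption: the unit is not stated on this page).
INPUT_PRICE_PER_M = 0.57   # USD per 1M input tokens (assumed unit)
OUTPUT_PRICE_PER_M = 1.42  # USD per 1M output tokens (assumed unit)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10K-token prompt with a 2K-token reply.
print(round(estimate_cost(10_000, 2_000), 4))  # → 0.0085
```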
Metadata

| Specification        | Value(s) |
|----------------------|----------|
| State                | Deprecated |
| Architecture         | Mixture-of-Experts (MoE) with Multi-Matrix Factorization Attention (MFA) and Attention-FFN Disaggregation (AFD) |
| Calibrated           | Yes / No |
| Mixture of Experts   | Yes / Yes |
| Total Parameters     | 321B |
| Activated Parameters | 38B |
| Reasoning            | Yes / No |
| Precision            | FP8 |
| Context Length       | 66K |
| Max Tokens           | 66K |

Where two values are listed, they follow the order of the two compared models; single values appear only once in the source listing.
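The sparsity implied by the parameter counts in the table above (38B activated out of 321B total) can be checked with quick arithmetic:

```python
# Activated-parameter fraction implied by the spec table:
# 38B parameters active per token out of 321B total.
TOTAL_PARAMS_B = 321
ACTIVATED_PARAMS_B = 38

active_fraction = ACTIVATED_PARAMS_B / TOTAL_PARAMS_B
print(f"{active_fraction:.1%}")  # → 11.8% of weights active per token
```

So each token touches roughly one eighth of the model's weights, which is the usual efficiency argument for MoE architectures.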
Supported Functionality

| Feature                          | Support |
|----------------------------------|---------|
| Serverless                       | Supported / Supported |
| Serverless LoRA                  | Supported / Not supported |
| Fine-tuning                      | Supported / Not supported |
| Embeddings                       | Supported / Supported |
| Rerankers                        | Supported / Not supported |
| Image Input                      | Not supported / Not supported |
| JSON Mode                        | Supported / Supported |
| Structured Outputs               | Supported / Not supported |
| Tools                            | Supported / Supported |
| FIM (fill-in-the-middle) Completion | Supported / Not supported |
| Chat Prefix Completion           | Supported / Not supported |
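Since JSON Mode is listed as supported by both models, a request sketch may help. The payload below follows the widely used OpenAI-compatible chat-completions convention; the model id and the `response_format` flag are assumptions based on that convention, not details taken from this page:

```python
import json

# Sketch of a JSON-mode chat request for an OpenAI-compatible endpoint.
# ASSUMPTIONS: the model id "step3" is a placeholder, and the
# response_format flag follows the common OpenAI-compatible convention;
# neither is confirmed by this page.
payload = {
    "model": "step3",  # placeholder model id
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user",
         "content": 'Give the capital of France as {"capital": ...}.'},
    ],
    # Requests that the model emit syntactically valid JSON.
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload, indent=2))
```

With this flag set, the response body's message content should itself parse as JSON; Structured Outputs (listed above as supported by only one of the two models) goes further by constraining the output to a caller-supplied schema.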
Comparison

See how this model compares with other popular models across key dimensions.
vs Step-3.5-Flash
vs Qwen3-VL-32B-Instruct
vs Qwen3-VL-32B-Thinking
vs Qwen3-VL-30B-A3B-Instruct
vs Qwen3-VL-30B-A3B-Thinking
vs Qwen3-VL-235B-A22B-Instruct
vs Qwen3-VL-235B-A22B-Thinking
vs Qwen3-Omni-30B-A3B-Instruct
vs Ring-flash-2.0
vs Qwen3-Omni-30B-A3B-Captioner
vs Qwen3-Omni-30B-A3B-Thinking
vs Qwen3-Next-80B-A3B-Instruct
vs Qwen3-Next-80B-A3B-Thinking
vs gpt-oss-120b
vs Qwen3-Coder-30B-A3B-Instruct
vs Qwen3-30B-A3B-Thinking-2507
