Model Comparison: MiniMax-M2 vs Qwen2.5-VL-72B-Instruct

Feb 28, 2026

Pricing

|        | MiniMax-M2       | Qwen2.5-VL-72B-Instruct |
| ------ | ---------------- | ----------------------- |
| Input  | $0.30 / M tokens | $0.59 / M tokens        |
| Output | $1.20 / M tokens | $0.59 / M tokens        |
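The per-request cost follows directly from the per-million-token rates above. A small sketch of that arithmetic; the request sizes used below are hypothetical:

```python
# Per-million-token rates from the pricing table above (USD).
PRICES = {
    "MiniMax-M2": {"input": 0.30, "output": 1.20},
    "Qwen2.5-VL-72B-Instruct": {"input": 0.59, "output": 0.59},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, given input and output token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical request: 10K input tokens, 2K output tokens.
print(round(request_cost("MiniMax-M2", 10_000, 2_000), 6))               # 0.0054
print(round(request_cost("Qwen2.5-VL-72B-Instruct", 10_000, 2_000), 6))  # 0.00708
```

Note the trade-off: MiniMax-M2 is cheaper on input but roughly twice the price on output, so output-heavy workloads can flip which model costs less.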

Metadata

|            | MiniMax-M2   | Qwen2.5-VL-72B-Instruct |
| ---------- | ------------ | ----------------------- |
| Created on | Oct 22, 2025 | Jan 27, 2025            |
| License    | MIT          | -                       |
| Provider   | MiniMaxAI    | Qwen                    |

Specification

|                      | MiniMax-M2         | Qwen2.5-VL-72B-Instruct |
| -------------------- | ------------------ | ----------------------- |
| State                | Deprecated         | Available               |
| Architecture         | Mixture of Experts | Vision-language model (VLM) with a streamlined, efficient vision encoder (ViT with window attention, SwiGLU, RMSNorm) aligned with the Qwen2.5 LLM structure. Features dynamic resolution and frame-rate training for video understanding, mRoPE for temporal sequence and speed, and YaRN for long-context extrapolation. |
| Calibrated           | No                 | No                      |
| Mixture of Experts   | Yes                | No                      |
| Total Parameters     | 230B               | 72B                     |
| Activated Parameters | 10B                | 72B                     |
| Reasoning            | No                 | No                      |
| Precision            | FP8                | FP8                     |
| Context Length       | 197K               | 131K                    |
| Max Output Tokens    | 131K               | 4K                      |
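The context-length and max-output figures above jointly cap how large an output budget a request can ask for: the response is limited both by the model's max-output cap and by whatever context window remains after the prompt. A minimal sketch of that arithmetic, working directly in "K" units as listed in the table:

```python
# Figures from the specification table above, in K tokens.
CONTEXT_K = {"MiniMax-M2": 197, "Qwen2.5-VL-72B-Instruct": 131}
MAX_OUTPUT_K = {"MiniMax-M2": 131, "Qwen2.5-VL-72B-Instruct": 4}

def output_budget_k(model: str, prompt_k: int) -> int:
    """Largest output budget (in K tokens) that fits: capped by both the
    model's max-output limit and the context left over after the prompt."""
    remaining = CONTEXT_K[model] - prompt_k
    return max(0, min(MAX_OUTPUT_K[model], remaining))

# Hypothetical 100K-token prompt sent to each model.
print(output_budget_k("MiniMax-M2", 100))               # 97 -- window-limited
print(output_budget_k("Qwen2.5-VL-72B-Instruct", 100))  # 4 -- output-cap-limited
```

With long prompts the two models hit different walls: Qwen2.5-VL-72B-Instruct's 4K output cap binds almost immediately, while MiniMax-M2 is usually limited by what remains of its 197K window.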

Supported Functionality

|                        | MiniMax-M2    | Qwen2.5-VL-72B-Instruct |
| ---------------------- | ------------- | ----------------------- |
| Serverless             | Supported     | Supported               |
| Serverless LoRA        | Not supported | Not supported           |
| Fine-tuning            | Not supported | Not supported           |
| Embeddings             | Not supported | Not supported           |
| Rerankers              | Not supported | Not supported           |
| Image input            | Not supported | Not supported           |
| JSON Mode              | Supported     | Not supported           |
| Structured Outputs     | Not supported | Not supported           |
| Tools                  | Supported     | Not supported           |
| FIM Completion         | Not supported | Not supported           |
| Chat Prefix Completion | Supported     | Supported               |
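Per the table, MiniMax-M2 supports JSON mode and tool calling while Qwen2.5-VL-72B-Instruct supports neither. A sketch of a request body exercising both features, assuming an OpenAI-compatible chat-completions schema; the model identifier and tool name are hypothetical and the field names come from that schema, not from this page:

```python
import json

# Assumed OpenAI-compatible request body. "response_format" enables JSON
# mode and "tools" declares a callable function -- the two features the
# table marks as supported for MiniMax-M2 only.
payload = {
    "model": "MiniMaxAI/MiniMax-M2",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Extract the city from: 'Weather in Paris?'"}
    ],
    "response_format": {"type": "json_object"},  # JSON mode
    "tools": [  # tool calling
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)  # wire format for the HTTP POST
```

Sending the same payload to Qwen2.5-VL-72B-Instruct would be expected to fail or silently ignore these fields, since the table lists both features as unsupported for that model.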

© 2025 SiliconFlow