MiniMax-M2.5

MiniMaxAI/MiniMax-M2.5

About MiniMax-M2.5

MiniMax-M2.5 is MiniMax's latest large language model, extensively trained with reinforcement learning across hundreds of thousands of complex real-world environments. Built on a 229B-parameter MoE architecture, it achieves state-of-the-art performance in coding, agentic tool use, search, and office work, scoring 80.2% on SWE-Bench Verified with 37% faster inference than M2.1.
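
As a quick illustration, the sketch below shows one way to query the model through an OpenAI-compatible chat completions endpoint. The base URL, environment variable name, and exact client usage are assumptions for illustration, not the official integration; check the SiliconFlow API reference before relying on them.

```python
# Minimal sketch of a serverless query to MiniMax-M2.5.
# Assumptions: an OpenAI-compatible endpoint at the base URL below and the
# model ID shown on this page; adjust both if your deployment differs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",   # assumed endpoint
    api_key=os.environ["SILICONFLOW_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```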

Available Serverless

Run queries immediately, pay only for usage

$0.2 / $1.0 per 1M tokens (input / output)
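
As a back-of-the-envelope aid, the sketch below turns the listed rates into a per-request cost estimate. The helper name and example token counts are made up for illustration; only the two prices come from this page.

```python
# Rough cost estimate for MiniMax-M2.5 on the serverless tier,
# using the listed rates: $0.20 per 1M input tokens, $1.00 per 1M output tokens.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 20K-token prompt with a 2K-token completion
print(f"${estimate_cost(20_000, 2_000):.4f}")  # -> $0.0060
```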

Metadata

Created on: Feb 15, 2026
License: MODIFIED-MIT
Provider: MiniMaxAI
HuggingFace: MiniMaxAI/MiniMax-M2.5

Specification

State: Available
Architecture: Mixture-of-Experts (MoE)
Calibrated: No
Mixture of Experts: Yes
Total Parameters: 229B
Activated Parameters: 229B
Reasoning: No
Precision: FP8
Context Length: 197K
Max Tokens: 131K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Not supported
JSON Mode: Supported
Structured Outputs: Not supported
Tools: Supported (see the request sketch below)
FIM Completion: Not supported
Chat Prefix Completion: Supported
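
Since Tools and JSON Mode are the two request-shaping features listed as supported, here is a hedged sketch of how both might look over an OpenAI-compatible endpoint. The tool definition, base URL, and prompts are invented for illustration; only the feature names come from the list above.

```python
# Sketch: tool calling and JSON mode with MiniMax-M2.5.
# The base URL and parameter support are assumptions; consult the
# SiliconFlow API reference for the authoritative request format.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

# Tool calling: declare a function the model may choose to invoke.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)

# JSON mode: request a syntactically valid JSON object.
# (Structured Outputs with a JSON schema is listed as not supported.)
resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",
    messages=[{"role": "user", "content": "Return the capital of France as a JSON object."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)
```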


© 2025 SiliconFlow