MiniMax-M2.5

MiniMaxAI/MiniMax-M2.5

About MiniMax-M2.5

MiniMax-M2.5 is MiniMax's latest large language model, extensively trained with reinforcement learning across hundreds of thousands of complex real-world environments. Built on a 229B-parameter MoE architecture, it achieves state-of-the-art performance in coding, agentic tool use, search, and office work, scoring 80.2% on SWE-Bench Verified with 37% faster inference than M2.1.

Available Serverless

Run queries immediately, pay only for usage

Input Price: $0.30 / M Tokens

Cache Read: $0.03 / M Tokens

Output Price: $1.20 / M Tokens
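As a quick sanity check on the rates above, here is a small Python sketch that estimates the cost of a single request at these per-million-token prices. The token counts in the example are hypothetical, and the assumption that cached prompt tokens bill at the cache-read rate (with the remainder at the input rate) reflects common provider practice rather than a documented billing formula.

```python
# Per-million-token rates from the pricing table above.
INPUT_PER_M = 0.30
CACHE_READ_PER_M = 0.03
OUTPUT_PER_M = 1.20

def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD.

    Assumes cached prompt tokens bill at the cache-read rate and the
    remaining (fresh) prompt tokens bill at the full input rate.
    """
    fresh = input_tokens - cached_tokens
    cost = (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000
    return round(cost, 6)

# Example: a 20K-token prompt, 15K of it served from cache, and a 2K-token reply.
print(estimate_cost(20_000, 15_000, 2_000))  # → 0.00435
```

Note how heavily cache reads discount long, repeated prompts: the 15K cached tokens here cost $0.00045 versus $0.0045 at the full input rate.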

Metadata

Created on:

License: MODIFIED-MIT

Provider: MiniMaxAI

HuggingFace: MiniMaxAI/MiniMax-M2.5

Specification

State: Available

Architecture: MoE

Calibrated: No

Mixture of Experts: Yes

Total Parameters: 229B

Activated Parameters:

Reasoning: No

Precision: FP8

Context length: 197K

Max Tokens: 131K

Supported Functionality

Serverless: Supported

Serverless LoRA: Not supported

Fine-tuning: Not supported

Embeddings: Not supported

Rerankers: Not supported

Image input: Not supported

JSON Mode: Supported

Structured Outputs: Not supported

Tools: Supported

FIM Completion: Not supported

Chat Prefix Completion: Supported
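The supported features above (JSON Mode and Tools) can be exercised in a single chat request. The sketch below builds such a request payload, assuming an OpenAI-compatible chat-completions schema, which is common for serverless model providers; the exact endpoint, field names, and the `get_weather` tool are assumptions for illustration, not documented API details.

```python
import json

# Hypothetical request payload for MiniMax-M2.5 on an OpenAI-compatible
# serverless endpoint. Demonstrates JSON Mode and tool declaration together.
payload = {
    "model": "MiniMaxAI/MiniMax-M2.5",
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"}
    ],
    # JSON Mode (listed as Supported): constrain the reply to valid JSON.
    "response_format": {"type": "json_object"},
    # Tools (listed as Supported): declare a function the model may call.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for this example
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "max_tokens": 1024,  # well under the model's 131K output limit
}

print(json.dumps(payload, indent=2))
```

Note that Structured Outputs is listed as not supported, so `response_format` here is limited to `json_object`; a JSON Schema-constrained response type should not be assumed to work.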

Ready to accelerate your AI development?
