MiniMax-M2

MiniMaxAI/MiniMax-M2

About MiniMax-M2

MiniMax-M2 redefines efficiency for agents. It is a compact, fast, and cost-effective Mixture-of-Experts (MoE) model with 230 billion total parameters, of which 10 billion are active per token, built for elite performance in coding and agentic tasks while maintaining strong general intelligence. That small activated footprint delivers the sophisticated, end-to-end tool-use performance expected of today's leading models in a streamlined form factor that makes deployment and scaling easier than ever.

Available Serverless

Run queries immediately, pay only for usage

$0.30 input / $1.20 output per 1M tokens
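Since billing is per token at the rates above, request cost is simple arithmetic. A minimal sketch using the input/output rates listed on this page:

```python
# Estimate serverless cost for MiniMax-M2 at the listed rates:
# $0.30 per 1M input tokens, $1.20 per 1M output tokens.
INPUT_RATE_PER_M = 0.30
OUTPUT_RATE_PER_M = 1.20

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request."""
    return (input_tokens * INPUT_RATE_PER_M +
            output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 50K-token prompt producing a 2K-token reply.
print(f"${estimate_cost(50_000, 2_000):.4f}")  # → $0.0174
```

Output tokens cost 4x input tokens here, so long completions dominate the bill for agentic workloads.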

Metadata

Created on

Oct 28, 2025

License

MIT

Provider

MiniMaxAI

HuggingFace

Specification

State

Available

Architecture

Calibrated

No

Mixture of Experts

Yes

Total Parameters

230 billion

Activated Parameters

10 billion

Reasoning

No

Precision

FP8

Context length

197K

Max Tokens

131K
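Prompt and completion share the context window, while the completion is separately capped by the max-token limit. A small budgeting sketch using the rounded limits listed above (197K context, 131K max output; exact limits may differ, check the API):

```python
# Budget a request against the limits listed on this page
# (rounded: 197K-token context window, 131K-token max output).
CONTEXT_LIMIT = 197_000
MAX_OUTPUT = 131_000

def max_completion_tokens(prompt_tokens: int) -> int:
    """Largest completion that fits: bounded by both the remaining
    context window and the model's max-output cap."""
    remaining = CONTEXT_LIMIT - prompt_tokens
    return max(0, min(remaining, MAX_OUTPUT))

print(max_completion_tokens(10_000))   # output-capped: 131000
print(max_completion_tokens(150_000))  # window-bound: 47000
```

Short prompts hit the 131K output cap first; prompts past ~66K tokens become window-bound instead.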

Supported Functionality

Serverless

Supported

Serverless LoRA

Not supported

Fine-tuning

Not supported

Embeddings

Not supported

Rerankers

Not supported

Image input

Not supported

JSON Mode

Supported

Structured Outputs

Not supported

Tools

Supported

FIM Completion

Not supported

Chat Prefix Completion

Supported
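Two of the supported features above, tool calling and JSON mode, are exercised in the request body. A minimal sketch building an OpenAI-style chat-completions payload; the field names assume an OpenAI-compatible schema and the `get_weather` tool is hypothetical, so check your provider's API reference:

```python
import json

# Build a chat-completions request body exercising two supported
# features: tool calling ("tools") and JSON mode ("response_format").
# OpenAI-compatible field names are an assumption, not confirmed here.
def build_request(user_message: str) -> dict:
    return {
        "model": "MiniMaxAI/MiniMax-M2",
        "messages": [{"role": "user", "content": user_message}],
        "response_format": {"type": "json_object"},  # JSON mode
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, for illustration
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

body = build_request("What's the weather in Paris? Reply as JSON.")
print(json.dumps(body, indent=2))
```

Structured Outputs (schema-constrained generation) is not supported, so JSON mode plus a prompt describing the desired shape is the closest available alternative.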

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.

Ready to accelerate your AI development?