MiniMax-M1-80k

MiniMaxAI/MiniMax-M1-80k

About MiniMax-M1-80k

MiniMax-M1 is an open-weight, large-scale hybrid-attention reasoning model with 456 B total parameters, of which 45.9 B are activated per token. It natively supports a 1 M-token context window, uses lightning attention to cut FLOPs by roughly 75% relative to DeepSeek R1 at a generation length of 100 K tokens, and is built on a Mixture-of-Experts (MoE) architecture. Efficient RL training with CISPO, combined with the hybrid attention design, yields state-of-the-art performance on long-input reasoning and real-world software engineering tasks.

Available Serverless

Run queries immediately, pay only for usage

$0.55 / $2.20 per 1M tokens (input / output)
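Below is a minimal sketch of querying this model through an OpenAI-compatible chat completions client and estimating per-request cost from the rates above. The base URL, API-key environment variable, and exact serving details are assumptions, not documented values; substitute the ones from your provider's dashboard. The model slug matches the listing on this page.

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",   # hypothetical endpoint URL
    api_key=os.environ["PROVIDER_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M1-80k",  # slug as listed on this page
    messages=[{"role": "user", "content": "Explain lightning attention in two sentences."}],
    max_tokens=512,
)
print(response.choices[0].message.content)

# Cost estimate at the listed rates:
# $0.55 per 1M input tokens, $2.20 per 1M output tokens.
usage = response.usage
cost = (usage.prompt_tokens * 0.55 + usage.completion_tokens * 2.20) / 1_000_000
print(f"~${cost:.6f} for this request")
```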

Metadata

Created on: Jun 17, 2025
License: apache-2.0
Provider: MiniMaxAI

HuggingFace

Specification

State: Available

Architecture

Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 456 B
Activated Parameters: 45.9 B
Reasoning: No
Precision: FP8
Context Length: 131K
Max Tokens: 131K

Supported Functionality

Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image Input: Not supported
JSON Mode: Not supported
Structured Outputs: Not supported
Tools: Not supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
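Since JSON Mode, Structured Outputs, and Tools are not supported on this deployment, a common workaround is to request JSON in the prompt and validate the reply client-side. The sketch below assumes the same hypothetical OpenAI-compatible endpoint as the earlier example; it is illustrative, not a documented provider API.

```python
import json
import os

from openai import OpenAI

# Same hypothetical endpoint and credentials as the example above.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["PROVIDER_API_KEY"],
)

prompt = (
    "Extract the product name and price from the text below. Reply with "
    'ONLY a JSON object of the form {"name": string, "price": number}.\n\n'
    "Text: The Widget Pro costs $19.99."
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M1-80k",
    messages=[{"role": "user", "content": prompt}],
)
raw = response.choices[0].message.content

try:
    data = json.loads(raw)
except json.JSONDecodeError:
    # The model may wrap the object in prose; salvage the first {...} span.
    data = json.loads(raw[raw.find("{") : raw.rfind("}") + 1])

print(data["name"], data["price"])
```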

Model FAQs: Usage, Deployment

Learn how to use and deploy this model with ease.
