MiniMax-M1-80k
About MiniMax-M1-80k
MiniMax-M1 is an open-weight, large-scale hybrid-attention reasoning model with 456B total parameters and 45.9B activated per token. It natively supports a 1M-token context (served here with a 131K context window, per the specification below), uses lightning attention for roughly 75% FLOPs savings versus DeepSeek R1 at a 100K-token generation length, and is built on a Mixture-of-Experts (MoE) architecture. Efficient RL training with CISPO, combined with the hybrid design, yields state-of-the-art performance on long-input reasoning and real-world software engineering tasks.
Available Serverless
Run queries immediately, pay only for usage.
Pricing: $0.55 / $2.20 per 1M tokens (input / output)
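Given the listed serverless rates, per-request cost is simple arithmetic. The helper below is a hypothetical sketch (not a provider utility) that applies the $0.55-per-1M-input and $2.20-per-1M-output prices:

```python
# Hypothetical cost estimator based on the listed serverless rates:
# $0.55 per 1M input tokens, $2.20 per 1M output tokens.
INPUT_RATE = 0.55 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.20 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 100K-token prompt with a 10K-token completion:
print(round(estimate_cost(100_000, 10_000), 4))  # → 0.077
```

At these rates, long prompts stay cheap relative to generation: output tokens cost four times as much as input tokens.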
Metadata

Specification
State: Available

Architecture
Calibrated: Yes
Mixture of Experts: Yes
Total Parameters: 456B
Activated Parameters: 45.9B
Reasoning: No
Precision: FP8
Context length: 131K
Max Tokens: 131K
Supported Functionality
Serverless: Supported
Serverless LoRA: Not supported
Fine-tuning: Not supported
Embeddings: Not supported
Rerankers: Not supported
Image input: Not supported
JSON Mode: Not supported
Structured Outputs: Not supported
Tools: Not supported
FIM Completion: Not supported
Chat Prefix Completion: Not supported
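Since the model is offered serverless through a chat API, a request can be built as a plain chat-completions payload. The sketch below assumes an OpenAI-compatible endpoint and uses a hypothetical model ID — check the provider's docs for the exact values. Because JSON mode, tools, and structured outputs are listed as not supported, the payload sticks to plain messages:

```python
import json

def build_chat_request(prompt: str, max_tokens: int = 1024) -> str:
    """Build a minimal chat-completions request body (assumed
    OpenAI-compatible schema; model ID is hypothetical)."""
    body = {
        "model": "MiniMaxAI/MiniMax-M1-80k",  # hypothetical model ID
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # must fit within the 131K limit above
    }
    return json.dumps(body)
```

The resulting JSON string would be POSTed to the provider's chat-completions endpoint with an API key in the `Authorization` header.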
SiliconFlow Service
Comprehensive solutions to deploy and scale your AI applications with maximum flexibility:
60% lower latency · 2x higher throughput · 65% cost savings

