MiniMax-M1-80k API, Deployment, Pricing
MiniMaxAI/MiniMax-M1-80k
MiniMax-M1 is an open-weight, large-scale hybrid-attention reasoning model with 456B total parameters, of which 45.9B are activated per token. It natively supports a 1M-token context, uses lightning attention that delivers roughly 75% FLOPs savings versus DeepSeek R1 at 100K-token generation length, and is built on a Mixture-of-Experts (MoE) architecture. Efficient RL training with CISPO, combined with the hybrid attention design, yields state-of-the-art performance on long-input reasoning and real-world software-engineering tasks.
Details
Model Provider
MiniMaxAI
Type
text
Sub Type
chat
Size
456B
Publish Time
Jun 17, 2025
Input Price
$0.55 / M Tokens
Output Price
$2.20 / M Tokens
Context length
131K
Tags
Reasoning, MoE, 456B, 131K
Compare with Other Models
See how this model stacks up against others.
Model FAQs: Usage, Deployment
Learn how to use, fine-tune, and deploy this model with ease.
What is the MiniMaxAI/MiniMax-M1-80k model, and what are its core capabilities and technical specifications?
In which business scenarios does MiniMaxAI/MiniMax-M1-80k perform well? Which industries or applications is it suitable for?
How can the performance and effectiveness of MiniMaxAI/MiniMax-M1-80k be optimized in actual business use?
Compared with other models, when should MiniMaxAI/MiniMax-M1-80k be selected?
What are SiliconFlow's key strengths in AI serverless deployment for MiniMaxAI/MiniMax-M1-80k?
What makes SiliconFlow the top platform for MiniMaxAI/MiniMax-M1-80k API?
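As a concrete starting point for API usage, here is a minimal sketch of a chat-completions request payload for this model. It assumes an OpenAI-compatible endpoint; the base URL shown is an assumption, and the authorization header format should be confirmed against the provider's documentation:

```python
import json

# Assumed endpoint; verify the actual base URL in the provider's API docs.
BASE_URL = "https://api.siliconflow.cn/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat-completions payload for MiniMax-M1-80k."""
    return {
        "model": "MiniMaxAI/MiniMax-M1-80k",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_request("Summarize the MoE architecture in two sentences.")
print(json.dumps(payload, indent=2))
# To send: POST this JSON to BASE_URL with an
# "Authorization: Bearer <your API key>" header.
```

Because the endpoint follows the OpenAI chat-completions convention, existing OpenAI SDK clients can typically be pointed at it by overriding the base URL and supplying the provider's API key.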