
MiniMaxAI
Text Generation
MiniMax-M2
Released on: Oct 28, 2025
MiniMax-M2 redefines efficiency for agents. It's a compact, fast, and cost-effective MoE model (230 billion total parameters, 10 billion active) built for elite performance in coding and agentic tasks while maintaining powerful general intelligence. With just 10 billion activated parameters, MiniMax-M2 delivers the sophisticated end-to-end tool-use performance expected of today's leading models in a streamlined form factor that makes deployment and scaling easier than ever...
Total Context: 197K
Max output: 131K
Input: $0.30 / M Tokens
Output: $1.20 / M Tokens

MiniMaxAI
Text Generation
MiniMax-M1-80k
Released on: Jun 17, 2025
MiniMax-M1 is an open-weight, large-scale hybrid-attention reasoning model with 456B total parameters and 45.9B activated per token. It natively supports a 1M-token context, uses lightning attention for roughly 75% FLOPs savings versus DeepSeek R1 at 100K-token generation lengths, and is built on a MoE architecture. Efficient RL training with CISPO and the hybrid design yield state-of-the-art performance on long-input reasoning and real-world software engineering tasks...
Total Context: 131K
Max output: 131K
Input: $0.55 / M Tokens
Output: $2.20 / M Tokens
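
As a quick illustration of how these per-million-token rates translate into per-request cost, here is a minimal Python sketch. The estimate_cost helper and the sample token counts are hypothetical; only the prices come from the listings above.

```python
# Minimal cost estimator for the per-million-token prices listed above.
# The PRICES table and estimate_cost helper are illustrative, not an official API.

PRICES = {
    # model: (input $/M tokens, output $/M tokens), as listed above
    "MiniMax-M2": (0.30, 1.20),
    "MiniMax-M1-80k": (0.55, 2.20),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 50K-token prompt with a 4K-token completion on MiniMax-M2:
# (50_000 * 0.30 + 4_000 * 1.20) / 1_000_000 = $0.0198
print(f"${estimate_cost('MiniMax-M2', 50_000, 4_000):.4f}")  # -> $0.0198
```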

