MiMo-V2-Flash (Feb 2026) vs. MiniMax M1 80k
Comparison between MiMo-V2-Flash (Feb 2026) and MiniMax M1 80k across intelligence, price, speed, context window and more.
For details relating to our methodology, see our Methodology page.
Model Comparison
| Metric | MiMo-V2-Flash (Feb 2026) | MiniMax M1 80k | Analysis |
|---|---|---|---|
| Creator | Xiaomi | MiniMax | |
| Context Window | 256k tokens (~384 A4 pages, 12pt Arial) | 1,000k tokens (~1,500 A4 pages, 12pt Arial) | MiMo-V2-Flash (Feb 2026) has a smaller context window than MiniMax M1 80k |
| Release Date | December 2025 | June 2025 | MiMo-V2-Flash (Feb 2026) has a more recent release date than MiniMax M1 80k |
| Parameters | 309B total, 15B active at inference time | 456B total, 45.9B active at inference time | MiMo-V2-Flash (Feb 2026) is the smaller model |
| Image Input Support | No | No | Neither MiMo-V2-Flash (Feb 2026) nor MiniMax M1 80k supports image input |
| Open Source (Weights) | Yes | Yes | Both MiMo-V2-Flash (Feb 2026) and MiniMax M1 80k are open weights |
| License | | | |
| License Supports Commercial Use Without Restrictions | Yes | Yes | Both licenses permit commercial use without restrictions |
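The A4-page figures in the table are derived from a rough tokens-per-page ratio. A minimal sketch of that conversion, assuming the ~667 tokens per page implied by 256,000 tokens ≈ 384 pages (the exact ratio is an assumption, not a published constant):

```python
def tokens_to_a4_pages(tokens: int, tokens_per_page: float = 256_000 / 384) -> int:
    """Rough A4-page equivalent (12pt Arial) of a token count, using the
    ~667 tokens/page ratio implied by the comparison table above."""
    return round(tokens / tokens_per_page)

print(tokens_to_a4_pages(256_000))    # MiMo-V2-Flash context window -> 384
print(tokens_to_a4_pages(1_000_000))  # MiniMax M1 80k context window -> 1500
```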
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Intelligence Evaluations
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Cost to Run Artificial Analysis Intelligence Index
Context Window
Pricing
Pricing: Input and Output Prices
Intelligence vs. Price (Log Scale)
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
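The end-to-end figure combines the three measured quantities described above. A minimal sketch of the calculation, assuming thinking time is reported directly in seconds (the function name and example values are hypothetical; the 500-token output length mirrors the chart):

```python
def end_to_end_seconds(ttft_s: float, output_tps: float,
                       thinking_s: float = 0.0,
                       output_tokens: int = 500) -> float:
    """Time to first token + 'thinking' time (reasoning models only)
    + time to stream `output_tokens` at the measured output speed."""
    return ttft_s + thinking_s + output_tokens / output_tps

# Hypothetical values: 0.5 s to first token, 3 s of thinking, 100 tokens/s
print(end_to_end_seconds(ttft_s=0.5, output_tps=100.0, thinking_s=3.0))  # 8.5
```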
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
Comparisons to MiMo-V2-Flash (Feb 2026)
MiMo-V2-Flash (Feb 2026)
gpt-oss-20B (high)
gpt-oss-120B (high)
GPT-5.2 (xhigh)
GPT-5.4 (xhigh)
GPT-5.4 Pro (xhigh)
GPT-5.3 Codex (xhigh)
Llama 4 Maverick
Gemini 3.1 Flash-Lite Preview
Gemini 3.1 Pro Preview
Gemini 3 Flash
Claude Sonnet 4.6 (max)
Claude Opus 4.6 (max)
Claude 4.5 Haiku
Mistral Large 3
DeepSeek V3.2
Grok 4.1 Fast
Grok 4.20 Beta 0309
Nova 2.0 Pro Preview (medium)
MiniMax-M2.5
NVIDIA Nemotron 3 Super
NVIDIA Nemotron 3 Nano
Kimi K2.5
K-EXAONE
K2 Think V2
Mi:dm K 2.5 Pro
GLM-5
Qwen3.5 397B A17B
Comparisons to MiniMax M1 80k
MiniMax M1 80k
gpt-oss-20B (high)
gpt-oss-120B (high)
GPT-5.2 (xhigh)
GPT-5.4 (xhigh)
GPT-5.4 Pro (xhigh)
GPT-5.3 Codex (xhigh)
Llama 4 Maverick
Gemini 3.1 Flash-Lite Preview
Gemini 3.1 Pro Preview
Gemini 3 Flash
Claude Sonnet 4.6 (max)
Claude Opus 4.6 (max)
Claude 4.5 Haiku
Mistral Large 3
DeepSeek V3.2
Grok 4.1 Fast
Grok 4.20 Beta 0309
Nova 2.0 Pro Preview (medium)
MiniMax-M2.5
NVIDIA Nemotron 3 Super
NVIDIA Nemotron 3 Nano
Kimi K2.5
K-EXAONE
MiMo-V2-Flash (Feb 2026)
K2 Think V2
Mi:dm K 2.5 Pro
GLM-5
Qwen3.5 397B A17B
Frequently Asked Questions

**Which model leads the Artificial Analysis Intelligence Index?**
Gemini 3.1 Pro Preview currently leads the Artificial Analysis Intelligence Index with a score of 57, out of 295 models evaluated.

**What are the top AI models by Intelligence Index?**
The top AI models by Intelligence Index are: 1. Gemini 3.1 Pro Preview (57), 2. GPT-5.4 (xhigh) (57), 3. GPT-5.3 Codex (xhigh) (54), 4. Claude Opus 4.6 (Adaptive Reasoning, Max Effort) (53), 5. Claude Sonnet 4.6 (Adaptive Reasoning, Max Effort) (52).

**Which models have the fastest output speed?**
Mercury 2 is the fastest at 944.0 tokens per second, followed by NVIDIA Nemotron 3 Super 120B A12B (Reasoning) (441.7 t/s) and Granite 3.3 8B (Non-reasoning) (353.2 t/s).

**Which models are the most affordable?**
Gemma 3n E4B Instruct is the most affordable at $0.03 per 1M tokens (blended), followed by LFM2 24B A2B ($0.05) and Nova Micro ($0.06).

**Which models have the lowest latency?**
Gemini 2.5 Flash-Lite Preview (Sep '25) (Non-reasoning) has the lowest time to first token at 0.31s, followed by Apriel-v1.5-15B-Thinker (0.37s) and LFM2 24B A2B (0.41s).

**What is the best open weights model?**
GLM-5 (Reasoning) is the highest-ranked open weights model with an Intelligence Index score of 50. There are 193 open weights models out of 295 total evaluated.

**What are the top open weights models by Intelligence Index?**
The top open weights AI models by Intelligence Index are: 1. GLM-5 (Reasoning) (50), 2. Kimi K2.5 (Reasoning) (47), 3. Qwen3.5 397B A17B (Reasoning) (45).

**What is the best reasoning model?**
Gemini 3.1 Pro Preview leads among 146 reasoning models with an Intelligence Index score of 57. Reasoning models use extended thinking to work through complex problems before providing answers.

**How are models compared?**
Models are compared across multiple dimensions including intelligence (quality), pricing, output speed (tokens per second), latency (time to first token), end-to-end response time, and context window size. Performance metrics are measured directly using standardized prompts across 410 models.

**How can I explore individual models?**
Click on any model name or row in the charts to view its dedicated page with detailed metrics and direct comparisons against similar models. You can also use the model selector to customize which models appear in each chart, or view the leaderboard for the full rankings.