MiniMax has launched a newer model, MiniMax-M2.7; we suggest considering that model instead.
For more information, see Comparison of MiniMax-M2.7 to other models and API provider benchmarks for MiniMax-M2.7.
MiniMax-M2.5 API Provider Benchmarking & Analysis
Analysis of API providers for MiniMax-M2.5 across performance metrics including latency (time to first token), output speed (output tokens per second), price, and other metrics. API providers benchmarked include FriendliAI, MiniMax, Eigen AI, Nebius (FP4), Novita, Fireworks, Clarifai, DeepInfra (FP8), Weights & Biases, SambaNova, SiliconFlow (FP8), Together.ai (FP4), and Parasail (FP8).
MiniMax-M2.5 is available through 13 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.
- For output speed, the top providers are SambaNova (398.0 t/s), Eigen AI (175.6 t/s), and Fireworks (172.1 t/s). Speed varies significantly across providers, with the fastest roughly 11.1x faster than the slowest (see the sketch after this list).
- For latency, Together.ai (FP4) (0.42s), Clarifai (0.56s), and FriendliAI (0.63s) offer the lowest time to first token.
- For pricing, SiliconFlow (FP8) ($0.40), DeepInfra (FP8) ($0.44), and FriendliAI ($0.53) offer the lowest blended prices per 1M tokens.
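The headline speed gap can be reproduced from the per-provider figures reported on this page. Below is a minimal Python sketch assuming those numbers; it is illustrative only, not a client for any benchmark API.

```python
# Reported output speeds (tokens/second) for MiniMax-M2.5, per this page.
output_speeds = {
    "SambaNova": 398.0,
    "Eigen AI": 175.6,
    "Fireworks": 172.1,
    "Parasail (FP8)": 35.7,  # slowest provider in this comparison
}

fastest = max(output_speeds, key=output_speeds.get)
slowest = min(output_speeds, key=output_speeds.get)
ratio = output_speeds[fastest] / output_speeds[slowest]

print(f"{fastest} is {ratio:.1f}x faster than {slowest}")
# -> SambaNova is 11.1x faster than Parasail (FP8)
```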
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above!
Pricing
Input and Output Prices: MiniMax-M2.5 Providers
Speed vs. Price: MiniMax-M2.5 Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed: MiniMax-M2.5 Providers
Latency vs. Output Speed: MiniMax-M2.5 Providers
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token: MiniMax-M2.5 Providers
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time: MiniMax-M2.5 Providers
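To make the composition of this metric concrete, here is a minimal sketch following the definition above; the inputs in the example call are placeholders, not measured results.

```python
def end_to_end_seconds(ttft_s: float, thinking_s: float,
                       output_speed_tps: float, n_tokens: int = 500) -> float:
    """End-to-end response time: time to first token, plus 'thinking'
    time for reasoning models, plus time to generate n_tokens at the
    provider's output speed."""
    return ttft_s + thinking_s + n_tokens / output_speed_tps

# Placeholder inputs: 0.5s to first token, 2.0s of 'thinking',
# then 500 tokens at 100 tokens/second.
print(f"{end_to_end_seconds(0.5, 2.0, 100.0):.1f}s")  # -> 7.5s
```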
API Features
Function (Tool) Calling & JSON Mode: MiniMax-M2.5 Providers
Context Window: MiniMax-M2.5 Providers
Summary Table of Key Comparison Metrics
Frequently Asked Questions
Common questions about MiniMax-M2.5 providers
MiniMax-M2.5 is available through 13 API providers: FriendliAI, MiniMax, Eigen AI, Nebius (FP4), Novita, Fireworks, Clarifai, DeepInfra (FP8), Weights & Biases, SambaNova, SiliconFlow (FP8), Together.ai (FP4), and Parasail (FP8). Each provider offers different performance characteristics and pricing.
MiniMax-M2.5 is currently available through 13 API providers that we benchmark and track.
The providers with the lowest time to first token for MiniMax-M2.5 are Together.ai (FP4) (0.42s), Clarifai (0.56s), and FriendliAI (0.63s). Lower latency means faster initial response time.
The most affordable providers for MiniMax-M2.5 by blended price are SiliconFlow (FP8) ($0.40 per 1M tokens), DeepInfra (FP8) ($0.44 per 1M tokens), and FriendliAI ($0.53 per 1M tokens). Blended price uses a 3:1 input to output token ratio.
The providers with the lowest input token pricing for MiniMax-M2.5 are SiliconFlow (FP8) ($0.20 per 1M input tokens), DeepInfra (FP8) ($0.27 per 1M input tokens), and FriendliAI ($0.30 per 1M input tokens).
The providers with the lowest output token pricing for MiniMax-M2.5 are DeepInfra (FP8) ($0.95 per 1M output tokens), SiliconFlow (FP8) ($1.00 per 1M output tokens), and FriendliAI ($1.20 per 1M output tokens).
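The blended figures quoted above follow directly from these per-token prices and the 3:1 input-to-output ratio. A minimal sketch of that calculation (using Decimal to avoid floating-point rounding surprises):

```python
from decimal import Decimal, ROUND_HALF_UP

# ($ per 1M input tokens, $ per 1M output tokens), per the answers above.
prices = {
    "SiliconFlow (FP8)": (Decimal("0.20"), Decimal("1.00")),
    "DeepInfra (FP8)": (Decimal("0.27"), Decimal("0.95")),
    "FriendliAI": (Decimal("0.30"), Decimal("1.20")),
}

def blended_price(input_price: Decimal, output_price: Decimal) -> Decimal:
    """Blended $ per 1M tokens at a 3:1 input-to-output token ratio."""
    blended = (3 * input_price + output_price) / 4
    return blended.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

for provider, (inp, out) in prices.items():
    print(f"{provider}: ${blended_price(inp, out)} per 1M tokens")
# -> SiliconFlow (FP8): $0.40, DeepInfra (FP8): $0.44, FriendliAI: $0.53
```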
Among the three most affordable providers, blended prices for MiniMax-M2.5 vary by roughly 1.3x: SiliconFlow (FP8) is cheapest at $0.40 per 1M tokens, while FriendliAI charges $0.53 per 1M tokens.
Output speed for MiniMax-M2.5 varies significantly across providers. SambaNova is the fastest at 398.0 t/s, which is 11.1x faster than Parasail (FP8) at 35.7 t/s.
11 of 13 providers support JSON mode for MiniMax-M2.5: FriendliAI, Eigen AI, Nebius (FP4), Novita, Fireworks, Clarifai, DeepInfra (FP8), Weights & Biases, SambaNova, SiliconFlow (FP8), and Together.ai (FP4).
12 of 13 providers support function calling for MiniMax-M2.5: FriendliAI, MiniMax, Nebius (FP4), Novita, Fireworks, Clarifai, DeepInfra (FP8), Weights & Biases, SambaNova, SiliconFlow (FP8), Together.ai (FP4), and Parasail (FP8).
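Many of the providers listed expose OpenAI-compatible endpoints, so JSON mode is typically requested as in the sketch below. The base URL and exact model identifier are placeholder assumptions; check your provider's documentation.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="MiniMax-M2.5",  # exact model ID varies by provider
    messages=[{"role": "user", "content": "Return three primes as JSON."}],
    response_format={"type": "json_object"},  # enables JSON mode
)
print(response.choices[0].message.content)
```

Function calling is requested analogously on the same endpoint via the tools parameter.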
The best provider for MiniMax-M2.5 depends on your priorities: SambaNova offers the highest output speed, Together.ai (FP4) has the lowest latency, and SiliconFlow (FP8) provides the most competitive pricing.
When choosing a provider for MiniMax-M2.5, consider: output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and API features like JSON mode or function calling.
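One way to weigh these priorities is a simple weighted score over normalized metrics. The helper below is a hypothetical sketch, not part of any provider's API, and the weights and metric values in the example are illustrative assumptions.

```python
def rank_providers(providers, weights):
    """Rank providers by a weighted score of min-max normalized metrics.

    Output speed counts as better when higher; latency and blended
    price count as better when lower, so they are inverted.
    """
    def norm(values, higher_is_better):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [(v - lo) / span if higher_is_better else (hi - v) / span
                for v in values]

    speed = norm([p["speed_tps"] for p in providers], True)
    latency = norm([p["ttft_s"] for p in providers], False)
    price = norm([p["blended_usd"] for p in providers], False)
    scored = [(weights["speed"] * s + weights["latency"] * l
               + weights["price"] * c, p["name"])
              for s, l, c, p in zip(speed, latency, price, providers)]
    return sorted(scored, reverse=True)

# Illustrative placeholder metrics, not benchmark data.
providers = [
    {"name": "Provider A", "speed_tps": 398.0, "ttft_s": 1.00, "blended_usd": 1.00},
    {"name": "Provider B", "speed_tps": 100.0, "ttft_s": 0.42, "blended_usd": 0.40},
]
weights = {"speed": 0.6, "latency": 0.2, "price": 0.2}  # speed-heavy preference
print(rank_providers(providers, weights))
# -> [(0.6, 'Provider A'), (0.4, 'Provider B')] with these weights
```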
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
For information about MiniMax-M2.5's intelligence, capabilities, modalities, and how it compares to other models, see the model overview page.