
Mistral 7B Instruct: Intelligence, Performance & Price Analysis
Analysis of Mistral's Mistral 7B Instruct and comparison to other AI models across key metrics, including quality, price, performance (tokens per second and time to first token), context window, and more. Click on any model to compare API providers for that model. For more details, including our methodology, see our FAQs.
Note: Where hosted by the provider, we track the latest version of Mistral 7B Instruct, i.e. v0.3.
Comparison Summary
Intelligence:
Mistral 7B is of lower quality than average, with an MMLU score of 0.245 and an Intelligence Index across evaluations of 10.
Price: Mistral 7B is cheaper than average, with a price of $0.25 per 1M tokens (blended 3:1).
Mistral 7B input token price: $0.25; output token price: $0.25 per 1M tokens.
Speed: Mistral 7B is faster than average, with an output speed of 123.6 tokens per second.
Latency: Mistral 7B has lower latency than average, taking 0.29s to receive the first token (TTFT).
Context Window: Mistral 7B has a smaller context window than average, at 8.2k tokens.
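The "blended 3:1" price above is a weighted average of input and output token prices, assuming three input tokens for every output token. A minimal sketch of that calculation (the function name and weights-as-parameters are our own illustration, not Artificial Analysis's code):

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted average price in USD per 1M tokens, assuming a
    3:1 input:output token ratio by default."""
    total_weight = input_weight + output_weight
    return (input_price * input_weight + output_price * output_weight) / total_weight

# Mistral 7B: input $0.25, output $0.25 per 1M tokens
print(blended_price(0.25, 0.25))  # → 0.25
```

Because Mistral 7B charges the same for input and output tokens, the blended price equals both; for models with asymmetric pricing the weighting matters.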
Highlights
Intelligence
Artificial Analysis Intelligence Index; Higher is better
Speed
Output Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better
Note: Long prompts not supported as a context window of at least 10k tokens is required
Mistral 7B Instruct Model Details
Comparisons to Mistral 7B
Mistral 7B
GPT-4.1
o3
o4-mini (high)
o3-pro
Llama 4 Scout
Llama 4 Maverick
Gemini 2.5 Flash (Reasoning)
Gemini 2.5 Pro
Claude 4 Opus Thinking
Claude 4 Sonnet
Claude 4 Sonnet Thinking
Magistral Small
DeepSeek R1 0528 (May '25)
DeepSeek V3 0324 (Mar '25)
Grok 3 mini Reasoning (high)
Nova Premier
MiniMax M1 80k
Llama Nemotron Ultra Reasoning
Qwen3 235B (Reasoning)
GPT-4o (Nov '24)
Further details