Mistral: Models Intelligence, Performance & Price

Analysis of Mistral's models across key metrics including quality, price, output speed, latency, context window & more. This analysis is intended to support you in choosing the best model provided by Mistral for your use-case.

Most Intelligent

#1 Mistral Medium 3.5 · 39
#2 Mistral Small 4 · 28
#3 Magistral Medium 1.2 · 27
#4 Mistral Large 3 · 23
#5 Devstral 2 · 22

Intelligence Index · Total: 24 models

Fastest

#1 Ministral 3 3B · 282 t/s
#2 Devstral Small · 225 t/s
#3 Mistral Medium 3.5 · 168 t/s
#4 Mistral Small 3.1 · 163 t/s
#5 Mistral 7B · 162 t/s

Output speed · Total: 24 models

Lowest Price

#1 Ministral 3 3B · $0.10
#2 Ministral 3 8B · $0.15
#3 Devstral Small · $0.15
#4 Mistral Small 3.2 · $0.15
#5 Mistral Small 3.1 · $0.15

Blended price (per 1M tokens, 3:1 input-output ratio) · Total: 24 models

Mistral offers 24 models, each with different intelligence, performance, and pricing characteristics. Below is a comparison of the key metrics across models.

  • For intelligence, the top models on Mistral are Mistral Medium 3.5 (39), Mistral Small 4 (28), and Magistral Medium 1.2 (27).
  • For output speed, the fastest models are Ministral 3 3B (282 t/s), Devstral Small (225 t/s), and Mistral Medium 3.5 (168 t/s). Speed varies significantly across models, with a 74% difference between the fastest and slowest.
  • For latency, Ministral 3 3B (0.51s), Mistral 7B (0.63s), and Mistral Small 3.2 (0.64s) offer the lowest time to first token.
  • For pricing, Ministral 3 3B ($0.10), Ministral 3 8B ($0.15), and Devstral Small ($0.15) offer the lowest blended prices per 1M tokens.
  • For context window size, Mistral Medium 3.5 (262k), Mistral Large 3 (262k), and Devstral 2 (262k) support the largest context windows on Mistral.
  • Ministral 3 3B offers both the fastest output and the best pricing, making it attractive for throughput-sensitive and cost-conscious applications. Mistral Medium 3.5 leads in intelligence for tasks that require the highest quality.

Highlights

Intelligence

Artificial Analysis Intelligence Index · Higher is better

Speed

Output tokens per second · Higher is better

Price

USD per 1M tokens (3:1 input-output ratio) · Lower is better
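As a rough illustration of how the 3:1 blended price is computed, the sketch below weights input and output token prices 3:1. The per-1M-token prices are hypothetical placeholders, not Mistral's published rates.

```python
# Hypothetical per-1M-token prices (placeholders, not Mistral's published rates).
input_price = 0.40   # USD per 1M input tokens
output_price = 2.00  # USD per 1M output tokens

# 3:1 blend: three parts input price to one part output price.
blended = (3 * input_price + 1 * output_price) / 4
print(f"Blended price: ${blended:.2f} per 1M tokens")  # -> $0.80
```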

Intelligence Evaluations

Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index · Higher is better

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Intelligence Evaluations

Intelligence evaluations measured independently by Artificial Analysis · Higher is better
Results claimed by AI Lab (not yet independently verified)
GDPval-AA · Agentic real-world work tasks, (ELO-500)/2000
Terminal-Bench Hard · Agentic coding & terminal use
𝜏²-Bench Telecom · Agentic tool use
AA-LCR · Long context reasoning
Humanity's Last Exam · Reasoning & knowledge
GPQA Diamond · Scientific reasoning
SciCode · Coding
IFBench · Instruction following
CritPt · Physics reasoning
APEX-Agents-AA · Long-horizon agentic tasks (no data available)
MMMU-Pro · Visual reasoning

Intelligence vs. Price

Blended at 7:2:1 (Cache Hit : Input : Output) · Artificial Analysis Intelligence Index · Price: USD per 1M tokens

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Price per token, represented as USD per million Tokens. Price is a blend of cache hit, input, and output token prices using the selected ratio.

The blended bar shown here prices the cached portion at the cache hit rate only. Other caching costs differ by provider:

  • Anthropic: charges a separate cache write fee, with different rates for 5-minute and 1-hour TTLs (1-hour TTL is more expensive). Blended price charts use Anthropic cache write price for the input leg.
  • Google (Vertex/Gemini): charges a per-hour cache storage fee in addition to cache hit pricing. Some providers also use tiered pricing for prompts above 200K tokens.
  • OpenAI, DeepSeek, others: typically charge only cache hit pricing with no write or storage fee.

See Prompt Caching for the full breakdown.
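To make the 7:2:1 weighting concrete, here is a minimal sketch of the blend. The per-1M-token prices are hypothetical placeholders, and the cached portion is priced at the cache hit rate only, matching the chart's assumption.

```python
# Hypothetical per-1M-token prices (placeholders, not real provider rates).
cache_hit_price = 0.10  # USD per 1M cached input tokens (cache hit rate)
input_price = 0.40      # USD per 1M uncached input tokens
output_price = 2.00     # USD per 1M output tokens

# Blend at 7:2:1 (cache hit : input : output), as used in the chart above.
weights = {"cache_hit": 7, "input": 2, "output": 1}
total = sum(weights.values())
blended = (weights["cache_hit"] * cache_hit_price
           + weights["input"] * input_price
           + weights["output"] * output_price) / total
print(f"Blended price: ${blended:.3f} per 1M tokens")  # -> $0.350
```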

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Context Window

Context Window

Context window: tokens limit · Higher is better

Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varied by model).

JSON Mode & Function Calling

Function (Tool) Calling & JSON Mode

Models · Function calling · JSON mode

Mistral Medium 3.5
Mistral Small 4
Magistral Medium 1.2
Mistral Large 3
Devstral 2
Mistral Medium 3.1
Devstral Small 2
Mistral Medium 3
Devstral Medium
Magistral Small 1.2
Ministral 3 14B

Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.

Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the models will always return a valid JSON object.
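As an illustration of what these two features look like in a request, the sketch below posts to a generic OpenAI-style chat completions endpoint. The endpoint URL, model alias, and field values are assumptions for illustration, not verified Mistral parameters.

```python
import requests

# Assumed OpenAI-style chat completions endpoint and model alias; both are
# illustrative placeholders, not verified values.
url = "https://api.mistral.ai/v1/chat/completions"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

# Function (tool) calling: declare a tool the model may choose to invoke.
payload = {
    "model": "mistral-small-latest",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

# JSON mode instead: constrain the reply to a valid JSON object.
# payload["response_format"] = {"type": "json_object"}

response = requests.post(url, headers=headers, json=payload, timeout=30)
print(response.json())
```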

Pricing

Intelligence vs. Price: see the Intelligence vs. Price chart and pricing notes in the section above.

Performance Summary

Output Speed vs. Price

Output speed: output tokens per second · Price: USD per 1M tokens · 10,000 Input Tokens

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).

Price per token, represented as USD per million Tokens. Price is a blend of cache hit, input, and output token prices using the selected ratio.

Speed

Measured by Output Speed (tokens per second)

Output Speed

Output tokens per second · Higher is better · 10,000 Input Tokens

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Latency

Measured by Time (seconds) to First Token

Time to First Answer Token

Seconds to first token received · Lower is better

Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.

Seconds to receive a 500 token response. Key components:

  • Input time: Time to receive the first response token
  • Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. The token count is based on the average reasoning tokens across a diverse set of 60 prompts (see methodology details).
  • Answer time: Time to generate 500 output tokens, based on output speed
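A minimal sketch of how those components combine into an end-to-end figure; the input numbers are made-up placeholders, not measured values for any Mistral model.

```python
# Hypothetical measurements (placeholders, not benchmark results).
time_to_first_token_s = 0.6   # input time: seconds until the first token
reasoning_tokens = 800        # average reasoning tokens (reasoning models only)
output_speed_tps = 150.0      # output tokens per second
answer_tokens = 500           # response length used in the charts

thinking_time_s = reasoning_tokens / output_speed_tps  # 0 for non-reasoning models
answer_time_s = answer_tokens / output_speed_tps

end_to_end_s = time_to_first_token_s + thinking_time_s + answer_time_s
print(f"End-to-end response time: {end_to_end_s:.1f}s")  # ~9.3s with these inputs
```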

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

End-to-End Response Time

Seconds to output 500 tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed

End-to-End Response Time vs. Price

End-to-end response time: end-to-end seconds to output 500 tokens · Price: USD per 1M tokens

Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.

Price per token, represented as USD per million Tokens. Price is a blend of cache hit, input, and output token prices using the selected ratio.

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Key definitions

Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).

Output speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).

Latency (time to first token): Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.

Blended price: Price per token, represented as USD per million tokens; a blend of cache hit, input, and output token prices using the selected ratio.

Output price: Price per token generated by the model (received from the API), represented as USD per million tokens.

Input price: Price per token included in the request/message sent to the API, represented as USD per million tokens.

Metrics are 'live' and based on the past 72 hours of measurements; measurements are taken 8 times a day for single requests and 2 times per day for parallel requests.

Frequently Asked Questions

Common questions about Mistral

What is the most intelligent model on Mistral?
The most intelligent model available on Mistral is Mistral Medium 3.5, with an Intelligence Index score of 39.

What is the fastest model on Mistral?
The fastest model on Mistral by output speed is Ministral 3 3B, at 281.6 tokens per second.

Which model on Mistral has the lowest latency?
The model with the lowest time to first token on Mistral is Ministral 3 3B at 0.51s. Lower latency means a faster initial response.

What is the most affordable model on Mistral?
The most affordable model on Mistral by blended price is Ministral 3 3B at $0.10 per 1M tokens (3:1 input-to-output ratio).

How much do prices vary across Mistral models?
Prices on Mistral vary up to 41x across models, from $0.10 per 1M tokens for Ministral 3 3B to $4.09 per 1M tokens for Mistral Medium.

Does Mistral offer an OpenAI-compatible API?
Yes. Mistral offers an OpenAI-compatible API, making it easy to switch from OpenAI or reuse existing OpenAI SDK integrations.
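For example, a drop-in call through the OpenAI Python SDK might look like the sketch below; the base URL and model alias are assumptions to illustrate the pattern, so check Mistral's documentation for the current values.

```python
from openai import OpenAI

# Point the OpenAI SDK at Mistral's endpoint (URL and model alias assumed).
client = OpenAI(
    base_url="https://api.mistral.ai/v1",
    api_key="YOUR_MISTRAL_API_KEY",
)

completion = client.chat.completions.create(
    model="mistral-medium-latest",
    messages=[{"role": "user", "content": "Give me one sentence about Mistral."}],
)
print(completion.choices[0].message.content)
```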

Which models support JSON mode?
23 of 24 models on Mistral support JSON mode for structured output.

Which models support function calling?
23 of 24 models on Mistral support function calling (tool use).

Does Mistral offer reasoning models?
Yes. Mistral offers 4 reasoning models: Mistral Medium 3.5, Mistral Small 4, Magistral Medium 1.2, and Magistral Small 1.2. Reasoning models use extended thinking to work through complex problems before providing an answer.

Does provider performance change over time?
Yes. Provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.

How should I choose a model on Mistral?
When choosing a model on Mistral, consider intelligence (for quality-sensitive tasks), output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and features such as context window size, JSON mode, and function calling support.