
Mistral: Models Intelligence, Performance & Price
Mistral Model Comparison Summary
Intelligence: Magistral Medium 1.2 and Mistral Large 3 are the highest intelligence models offered by Mistral, followed by Devstral 2, Mistral Medium 3.1 & Devstral Small 2.
Output Speed (tokens/s): Ministral 3 3B (292 t/s) and Devstral Small (237 t/s) are the fastest models offered by Mistral, followed by Mistral Small 3, Devstral Small 2 & Devstral Small (May).
Latency (seconds): Ministral 3 3B (0.28s) and Ministral 3 8B (0.29s) are the lowest latency models offered by Mistral, followed by Mistral Small 3.1, Mistral Small (Feb) & Ministral 3 14B.
Blended Price ($/M tokens): Devstral 2 ($0.00) and Devstral Small 2 ($0.00) are the cheapest models offered by Mistral, followed by Ministral 3 3B, Mistral Small 3.2 & Ministral 3 8B.
Context Window Size: Ministral 3 14B (256k) and Ministral 3 8B (256k) are the largest context window models offered by Mistral, followed by Ministral 3 3B, Mistral Large 3 & Devstral 2.

Highlights
Intelligence Evaluations
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Intelligence vs. Price
Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Context Window
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
Function (Tool) Calling & JSON Mode
Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the model will always return a valid JSON object.
Pricing
Performance Summary
Output Speed vs. Price
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Speed
Measured by Output Speed (tokens per second)
Output Speed
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token
Time to first answer token received, in seconds, after the API request is sent. For reasoning models, this includes the model's 'thinking' time before providing an answer. For models which do not support streaming, this represents the time to receive the completion.
Seconds to receive a 500 token response; a worked sketch follows this list. Key components:
- Input time: Time to receive the first response token
- Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. Token count is based on the average reasoning tokens across a diverse set of 60 prompts (see methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
End-to-End Response Time
Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time vs. Price
Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
Key definitions
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
Output speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Latency (time to first token): Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
Blended price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Output price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
Input price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Metrics are 'live' and are based on the past 72 hours of measurements; measurements are taken 8 times per day for single requests and 2 times per day for parallel requests.