o4-mini (high) vs. Codestral (May '24)

Comparison between o4-mini (high) and Codestral (May '24) across intelligence, price, speed, context window and more.
For details relating to our methodology, see our Methodology page.

Highlights

Intelligence: Artificial Analysis Intelligence Index; Higher is better
Speed: Output Tokens per Second; Higher is better
Price: USD per 1M Tokens; Lower is better

Model Comparison

Metric | o4-mini (high) | Codestral (May '24) | Analysis
Creator | OpenAI | Mistral |
Context Window | 200k tokens (~300 A4 pages of size 12 Arial font) | 33k tokens (~49 A4 pages of size 12 Arial font) | o4-mini (high) has a larger context window than Codestral (May '24)
Release Date | April 2025 | May 2024 | o4-mini (high) has a more recent release date than Codestral (May '24)
Image Input Support | No | No | Neither o4-mini (high) nor Codestral (May '24) supports image input
Open Source (Weights) | No | Yes | Codestral (May '24) is open source while o4-mini (high) is proprietary
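As a quick sanity check on the page-count conversions in the table, both context window figures imply roughly the same tokens-per-page ratio:

```python
# Tokens per A4 page (12pt Arial) implied by the context window figures above.
print(200_000 / 300)  # o4-mini (high): ~667 tokens per page
print(33_000 / 49)    # Codestral (May '24): ~673 tokens per page
```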

Intelligence

Artificial Analysis Intelligence Index

Intelligence Index incorporates 7 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500
Artificial Analysis Intelligence Index: Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 2 was released in Feb '25 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Artificial Analysis Intelligence Index by Model Type

Intelligence Index (same 7 evaluations as above); results shown separately for Reasoning Models and Non-Reasoning Models

Artificial Analysis Intelligence Index by Open Weights vs Proprietary

Intelligence Index (same 7 evaluations as above); results shown separately for Proprietary and Open Weights models
Open Weights: Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).

Artificial Analysis Coding Index

Represents the average of coding benchmarks in the Artificial Analysis Intelligence Index (LiveCodeBench & SciCode)
Artificial Analysis Coding Index: Represents the average of coding evaluations in the Artificial Analysis Intelligence Index. Currently includes: LiveCodeBench, SciCode. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Artificial Analysis Math Index

Represents the average of math benchmarks in the Artificial Analysis Intelligence Index (AIME 2024 & Math-500)
Artificial Analysis Math Index: Represents the average of math evaluations in the Artificial Analysis Intelligence Index. Currently includes: AIME, MATH-500. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
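Both sub-indices are described above as averages of their component evaluations. A minimal Python sketch of that aggregation, assuming an equal-weighted average and using made-up scores rather than measured results for either model:

```python
# Hypothetical example scores on a 0-100 scale; not measured results for either model.
scores = {
    "LiveCodeBench": 62.0,
    "SciCode": 41.0,
    "AIME": 55.0,
    "MATH-500": 90.0,
}

def sub_index(scores: dict[str, float], components: list[str]) -> float:
    """Equal-weighted average of the listed component evaluations."""
    return sum(scores[name] for name in components) / len(components)

coding_index = sub_index(scores, ["LiveCodeBench", "SciCode"])  # (62 + 41) / 2 = 51.5
math_index = sub_index(scores, ["AIME", "MATH-500"])            # (55 + 90) / 2 = 72.5
print(f"Coding Index: {coding_index:.1f}, Math Index: {math_index:.1f}")
```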

Intelligence Evaluations

Intelligence evaluations measured independently by Artificial Analysis; Higher is better
Results claimed by AI Lab (not yet independently verified)
MMLU-Pro (Reasoning & Knowledge)
GPQA Diamond (Scientific Reasoning)
Humanity's Last Exam (Reasoning & Knowledge)
LiveCodeBench (Coding)
SciCode (Coding)
HumanEval (Coding)
MATH-500 (Quantitative Reasoning)
AIME 2024 (Competition Math)
Multilingual Index (Artificial Analysis)
While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.

Intelligence vs. Price

Artificial Analysis Intelligence Index (Version 2, released Feb '25); Price: USD per 1M Tokens
While higher intelligence models are typically more expensive, they do not all follow the same price-quality curve.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
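The Price metric on this chart is a 3:1 blend of input and output prices. A small Python sketch of that blend, using illustrative prices rather than the actual rates of o4-mini (high) or Codestral (May '24):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend input and output USD-per-1M-token prices at the 3:1 input:output
    ratio used for the Price metric on this page."""
    return (3 * input_price + 1 * output_price) / 4

# Illustrative prices only (USD per 1M tokens), not the actual rates of either model.
print(blended_price(input_price=1.10, output_price=4.40))  # -> 1.925
```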

Intelligence vs. Output Speed

Artificial Analysis Intelligence Index (Version 2, released Feb '25); Output Speed: Output Tokens per Second
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API for models which support streaming).

Intelligence vs. End-to-End Response Time

Artificial Analysis Intelligence Index (Version 2, released Feb '25); Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
End-to-End Response Time: Seconds to receive a 500 token response. Key components:
  • Input time: Time to receive the first response token
  • Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. The token count is based on the average number of reasoning tokens across a diverse set of 60 prompts (methodology details).
  • Answer time: Time to generate 500 output tokens, based on output speed
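Given the three components above, an end-to-end response time estimate can be assembled from a model's time to first token, reasoning-token count, and output speed. A rough Python sketch with placeholder figures (none of these numbers are measurements of either model):

```python
def end_to_end_seconds(ttft_s: float, reasoning_tokens: int,
                       answer_tokens: int, output_tokens_per_s: float) -> float:
    """Estimate seconds to receive a full response as
    input time (TTFT) + thinking time + answer time, per the breakdown above."""
    thinking_s = reasoning_tokens / output_tokens_per_s
    answer_s = answer_tokens / output_tokens_per_s
    return ttft_s + thinking_s + answer_s

# Placeholder figures: 0.8 s to first token, 2,000 reasoning tokens,
# a 500-token answer, and 120 output tokens/s.
print(end_to_end_seconds(0.8, 2000, 500, 120.0))  # 0.8 + 16.7 + 4.2 ≈ 21.6 s
```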

Intelligence Index Token Use & Cost

Output Tokens Used to Run Artificial Analysis Intelligence Index

Tokens used to run all evaluations in the Artificial Analysis Intelligence Index, split into reasoning tokens and answer tokens
Artificial Analysis Intelligence Index Token Use: The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).

Intelligence vs. Output Tokens Used in Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index (Version 2, released Feb '25); Output Tokens Used in Artificial Analysis Intelligence Index

Cost to Run Artificial Analysis Intelligence Index

Cost (USD) to run all evaluations in the Artificial Analysis Intelligence Index, split into input, reasoning, and output cost
Cost to Run Artificial Analysis Intelligence Index: The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
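The calculation is token counts multiplied by per-token prices. A simplified Python sketch, assuming reasoning tokens are billed at the output rate (a detail not spelled out on this page) and using placeholder token counts and prices:

```python
def eval_cost_usd(input_tokens: int, reasoning_tokens: int, answer_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost to run an evaluation suite, given token counts and USD-per-1M-token prices.
    Assumes reasoning tokens are billed at the output rate."""
    input_cost = input_tokens / 1_000_000 * input_price_per_m
    output_cost = (reasoning_tokens + answer_tokens) / 1_000_000 * output_price_per_m
    return input_cost + output_cost

# Placeholder token counts and prices, not the actual Intelligence Index figures.
print(round(eval_cost_usd(2_500_000, 10_000_000, 1_500_000,
                          input_price_per_m=1.10, output_price_per_m=4.40), 2))
# 2.5 * 1.10 + 11.5 * 4.40 = 53.35 USD
```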

Intelligence vs. Cost to Run Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index (Version 2, released Feb '25); Cost to Run Intelligence Index

Context Window

Context Window

Context Window: Tokens Limit; Higher is better
Larger context windows are relevant to RAG (Retrieval Augmented Generation) LLM workflows, which typically involve reasoning over and retrieving from large amounts of data.
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).

Intelligence vs. Context Window

Artificial Analysis Intelligence Index (Version 2, released Feb '25); Context Window: Tokens Limit

Pricing: Input and Output Prices

Price: USD per 1M Tokens; input price and output price shown separately
Input Price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.

Pricing: Cached Input Prompts

Price: USD per 1M Tokens; shown for input (standard), cache write, cache hit, cache storage per hour, and output (standard)
Cache Write: One-time cost charged when storing a prompt in the cache for future reuse, represented as USD per million tokens.
Cache Hit: Price per token for cached prompts (previously processed), typically offering a significant discount compared to regular input price, represented as USD per million tokens.
Cache Storage per Hour: Cost to maintain tokens in cache storage, charged per million tokens per hour. Currently only applicable to Google's Gemini models.
Output Price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
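To illustrate how caching changes effective input cost under the pricing components defined above, here is a small Python sketch with hypothetical rates (cache storage cost is ignored for simplicity, and none of the numbers are the rates of either model):

```python
def input_cost_usd(fresh_tokens: int, cached_tokens: int,
                   input_price_per_m: float, cache_write_per_m: float,
                   cache_hit_per_m: float, first_request: bool) -> float:
    """Input-side cost of one request when part of the prompt is cacheable.
    The cacheable part pays the cache-write rate on the first request and the
    discounted cache-hit rate afterwards; cache storage cost is ignored here."""
    cached_rate = cache_write_per_m if first_request else cache_hit_per_m
    return (fresh_tokens * input_price_per_m + cached_tokens * cached_rate) / 1_000_000

# Hypothetical rates (USD per 1M tokens): input 1.00, cache write 1.25, cache hit 0.10.
first = input_cost_usd(2_000, 50_000, 1.00, 1.25, 0.10, first_request=True)
later = input_cost_usd(2_000, 50_000, 1.00, 1.25, 0.10, first_request=False)
print(round(first, 4), round(later, 4))  # 0.0645 0.007
```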

Pricing: Image Input Pricing

Image Input Price: USD per 1k images at 1MP (1024x1024)
Price per 1k 1MP images: Price for 1,000 images at a resolution of 1 Megapixel (1024 x 1024) processed by the model.

Performance Summary

Output Speed vs. Price

Output Speed: Output Tokens per Second; Price: USD per 1M Tokens

Latency vs. Output Speed

Latency: Seconds to First Token Received; Output Speed: Output Tokens per Second
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
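Both metrics can be measured directly against any streaming API. A generic Python sketch of the timing logic (stream_chunks() and count_tokens() are hypothetical placeholders for your API client's streaming iterator and tokenizer):

```python
import time
from typing import Callable, Iterable

def measure_streaming(chunks: Iterable[str],
                      count_tokens: Callable[[str], int]) -> tuple[float, float]:
    """Return (time_to_first_token_s, output_tokens_per_s) for one streaming response.
    Output speed is measured from the first received chunk onward, matching the
    definitions above."""
    start = time.monotonic()
    first_token_at = None
    tokens = 0
    for chunk in chunks:
        if first_token_at is None:
            first_token_at = time.monotonic()
        tokens += count_tokens(chunk)
    if first_token_at is None:
        raise ValueError("no chunks received")
    generation_time = max(time.monotonic() - first_token_at, 1e-9)
    return first_token_at - start, tokens / generation_time

# Usage sketch: stream_chunks() and count_tokens() are hypothetical placeholders for
# your API client's streaming iterator and tokenizer.
# ttft, tokens_per_s = measure_streaming(stream_chunks(prompt="..."), count_tokens)
```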

Speed

Measured by Output Speed (tokens per second)

Output Speed

Output Tokens per Second; Higher is better

Output Speed by Input Token Count (Context Length)

Output Tokens per Second at 100, 1k, 10k, and 100k input tokens; Higher is better
Input Tokens Length: Length of tokens provided in the request. See Prompt Options above to see benchmarks of different input prompt lengths across other charts.

Output Speed Variance

Output Tokens per Second; Results by percentile; Higher is better
Median, Other points represent 5th, 25th, 75th, 95th Percentiles respectively
Boxplot: Shows variance of measurements

Output Speed, Over Time

Output Tokens per Second; Higher is better
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.

Latency

Measured by Time (seconds) to First Token

Latency: Time To First Answer Token

Seconds to First Answer Token Received; accounts for Reasoning Model 'thinking' time, broken down into input processing and thinking (reasoning models, when applicable)
Time To First Answer Token: Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.

Latency: Time To First Token

Seconds to First Token Received; Lower is better

Time to First Token by Input Token Count (Context Length)

Seconds to First Token Received at 100, 1k, 10k, and 100k input tokens; Lower is better

Time to First Token Variance

Seconds to First Token Received; Results by percentile; Lower is better
Median, Other points represent 5th, 25th, 75th, 95th Percentiles respectively
Boxplot: Shows variance of measurements

Time to First Token, Over Time

Seconds to First Token Received; Lower median is better

End-to-End Response Time

Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed

End-to-End Response Time

Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better. Broken down into input processing time, 'thinking' time (reasoning models), and outputting time

End-to-End Response Time by Input Token Count (Context Length)

Seconds to Output 500 Tokens at 100, 1k, 10k, and 100k input tokens, including reasoning model 'thinking' time; Lower is better
End-to-End Response Time: Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.

End-to-End Response Time, Over Time

Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
