Qwen2 Instruct 72B: Intelligence, Performance & Price Analysis
Analysis of Alibaba's Qwen2 Instruct 72B and comparison to other AI models across key metrics including quality, price, performance (tokens per second & time to first token), context window & more. Click on any model to compare API providers for that model. For more details, including our methodology, see our FAQs.
Comparison Summary
Intelligence: Qwen2 72B is of lower quality than average, with an MMLU score of 0.622 and an Artificial Analysis Intelligence Index of 33.
Price:
Speed: Qwen2 72B is slower than average, with an output speed of 30.9 tokens per second.
Latency: Qwen2 72B has lower latency than average, taking 1.39s to receive the first token (TTFT).
Context Window: Qwen2 72B has a smaller context window than average, with a context window of 130k tokens.
Highlights
Intelligence
Artificial Analysis Intelligence Index; Higher is better
Speed
Output Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better
Qwen2 Instruct 72B Model Details
Intelligence
Artificial Analysis Intelligence Index
Intelligence Index incorporates 7 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500
Artificial Analysis Intelligence Index: Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 2 was released in Feb '25 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Artificial Analysis Intelligence Index by Model Type
Intelligence Index incorporates 7 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500
Reasoning Model
Non-Reasoning Model
Artificial Analysis Intelligence Index by Open Weights vs Proprietary
Intelligence Index incorporates 7 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500
Proprietary
Open Weights (Commercial Use Restricted)
Open Weights
Open Weights: Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).
Artificial Analysis Coding Index
Represents the average of coding benchmarks in the Artificial Analysis Intelligence Index (LiveCodeBench & SciCode)
Artificial Analysis Coding Index: Represents the average of coding evaluations in the Artificial Analysis Intelligence Index. Currently includes: LiveCodeBench, SciCode. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Artificial Analysis Math Index
Represents the average of math benchmarks in the Artificial Analysis Intelligence Index (AIME 2024 & MATH-500)
Artificial Analysis Math Index: Represents the average of math evaluations in the Artificial Analysis Intelligence Index. Currently includes: AIME, MATH-500. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
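To make the "average of benchmarks" framing above concrete, here is a minimal Python sketch that averages placeholder scores for the listed coding and math evaluations; the numbers are illustrative only, not Qwen2 72B's actual results.

```python
# Minimal sketch: Coding and Math indices as plain averages of their
# component evaluations, per the descriptions above. Scores are
# placeholder values on a 0-100 scale, not real benchmark results.
from statistics import mean

scores = {
    "LiveCodeBench": 24.0,   # placeholder
    "SciCode": 18.0,         # placeholder
    "AIME": 10.0,            # placeholder
    "MATH-500": 72.0,        # placeholder
}

coding_index = mean([scores["LiveCodeBench"], scores["SciCode"]])
math_index = mean([scores["AIME"], scores["MATH-500"]])

print(f"Coding Index: {coding_index:.1f}")  # -> 21.0
print(f"Math Index: {math_index:.1f}")      # -> 41.0
```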
Intelligence Evaluations
Intelligence evaluations measured independently by Artificial Analysis; Higher is better
Results claimed by AI Lab (not yet independently verified)
MMLU-Pro (Reasoning & Knowledge)
GPQA Diamond (Scientific Reasoning)
Humanity's Last Exam (Reasoning & Knowledge)
LiveCodeBench (Coding)
SciCode (Coding)
HumanEval (Coding)
MATH-500 (Quantitative Reasoning)
AIME 2024 (Competition Math)
Multilingual Index (Artificial Analysis)
Intelligence vs. Price
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Price: USD per 1M Tokens
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemini 2.5 Pro Preview, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3 mini Reasoning (high), Grok 3, Nova Pro.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
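As a worked example of the 3:1 blend described above, the short Python sketch below computes a blended price from illustrative input and output prices (placeholders, not any provider's actual Qwen2 72B rates).

```python
# Minimal sketch of the blended price used on the price axis:
# a 3:1 weighted average of input and output token prices (USD per 1M tokens).
def blended_price(input_price: float, output_price: float) -> float:
    """Blend input and output prices at a 3:1 ratio."""
    return (3 * input_price + 1 * output_price) / 4

# Placeholder prices, not actual Qwen2 72B provider pricing.
print(blended_price(input_price=0.90, output_price=0.90))  # -> 0.9
```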
Intelligence vs. Output Speed
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Output Speed: Output Tokens per Second
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemini 2.5 Pro Preview, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3 mini Reasoning (high), Grok 3, Nova Pro, Qwen2 72B.
Output Speed: Tokens per second received while the model is generating tokens (i.e. after first chunk has been received from the API for models which support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Intelligence vs. End-to-End Response Time
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemini 2.5 Pro Preview, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3 mini Reasoning (high), Grok 3, Nova Pro, Qwen2 72B.
End-to-End Response Time: Seconds to receive a 500 token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. Amount of tokens based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
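Combining the components above, end-to-end response time can be estimated as in the Python sketch below; it uses this page's headline Qwen2 72B figures (1.39 s TTFT, 30.9 tokens/s) and approximates thinking time as reasoning tokens divided by output speed.

```python
# Minimal sketch of the end-to-end response time decomposition above:
# input (TTFT) + thinking time (reasoning models only) + answer time.
def e2e_response_time(ttft_s: float, output_speed_tps: float,
                      reasoning_tokens: int = 0, answer_tokens: int = 500) -> float:
    thinking_s = reasoning_tokens / output_speed_tps if reasoning_tokens else 0.0
    answer_s = answer_tokens / output_speed_tps
    return ttft_s + thinking_s + answer_s

# Non-reasoning model example using this page's Qwen2 72B figures
# (1.39 s TTFT, 30.9 tokens/s output speed).
print(f"{e2e_response_time(ttft_s=1.39, output_speed_tps=30.9):.1f} s")  # -> ~17.6 s
```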
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Tokens used to run all evaluations in the Artificial Analysis Intelligence Index
Reasoning Tokens
Answer Tokens
Artificial Analysis Intelligence Index Tokens Use: The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).
Intelligence vs. Output Tokens Used in Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Output Tokens Used in Artificial Analysis Intelligence Index
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemma 3 27B, Gemini 2.5 Pro Preview, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3, Nova Pro, Qwen2 72B.
Cost to Run Artificial Analysis Intelligence Index
Cost (USD) to run all evaluations in the Artificial Analysis Intelligence Index
Input Cost
Reasoning Cost
Output Cost
Cost to Run Artificial Analysis Intelligence Index: The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
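The cost figure described above is just token counts multiplied by per-token prices; the Python sketch below shows the arithmetic with placeholder token counts and prices rather than the values actually used for Qwen2 72B.

```python
# Minimal sketch of the Intelligence Index cost calculation described above:
# input tokens at the input price plus output tokens at the output price.
def eval_run_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD, with prices quoted per 1M tokens."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Placeholder token counts and prices, not Artificial Analysis' actual figures.
print(f"${eval_run_cost(1_500_000, 4_000_000, 0.90, 0.90):.2f}")  # -> $4.95
```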
Intelligence vs. Cost to Run Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Cost to Run Intelligence Index
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemini 2.5 Pro Preview, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3, Nova Pro.
Context Window
Context Window
Context Window: Tokens Limit; Higher is better
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
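Because the limit applies to input and output tokens combined, a request only fits if the prompt plus the requested completion stays inside the window; the sketch below illustrates this check using the rounded 130k figure quoted on this page and hypothetical token counts.

```python
# Minimal sketch: checking that a request fits a combined input+output
# token limit. 130_000 follows the rounded figure quoted on this page;
# the prompt sizes are hypothetical examples.
CONTEXT_WINDOW = 130_000  # combined input + output tokens (rounded)

def fits_context(prompt_tokens: int, max_output_tokens: int,
                 window: int = CONTEXT_WINDOW) -> bool:
    return prompt_tokens + max_output_tokens <= window

print(fits_context(prompt_tokens=120_000, max_output_tokens=8_000))  # True
print(fits_context(prompt_tokens=128_000, max_output_tokens=4_000))  # False
```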
Intelligence vs. Context Window
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Context Window: Tokens Limit
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemma 3 27B, Gemini 2.5 Pro Preview, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3 mini Reasoning (high), Grok 3, Nova Pro, Llama 3.1 Nemotron Ultra 253B Reasoning, Qwen2 72B.
Pricing
Pricing: Input and Output Prices
Price: USD per 1M Tokens
Input price
Output price
Input Price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Pricing Comparison of Qwen2 Instruct 72B API Providers
Performance Summary
Output Speed vs. Price
Output Speed: Output Tokens per Second; Price: USD per 1M Tokens
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemini 2.5 Flash Preview, Gemini 2.5 Pro Preview, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3 mini Reasoning (high), Grok 3, Nova Pro.
Output Speed: Tokens per second received while the model is generating tokens (i.e. after first chunk has been received from the API for models which support streaming).
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Latency vs. Output Speed
Latency: Seconds to First Token Received; Output Speed: Output Tokens per Second
Models shown: GPT-4o (Nov '24), GPT-4.1, o4-mini (high), o3-mini (high), GPT-4.1 mini, Llama 4 Scout, Llama 4 Maverick, Gemini 2.0 Flash, Gemini 2.5 Flash Preview, Gemini 2.5 Pro Preview, Mistral Large 2 (Nov '24), DeepSeek R1, DeepSeek V3 (Mar '25), Grok 3 mini Reasoning (high), Grok 3, Nova Pro, Qwen2 72B.
Output Speed: Tokens per second received while the model is generating tokens (i.e. after first chunk has been received from the API for models which support streaming).
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Tokens per Second; Higher is better
Output Speed: Tokens per second received while the model is generating tokens (i.e. after first chunk has been received from the API for models which support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
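For readers who want to reproduce this kind of measurement, the Python sketch below times a streaming request against an OpenAI-compatible endpoint; the base URL, API key, and model identifier are placeholders, and counting streamed chunks is only a rough proxy for output tokens.

```python
# Rough sketch of measuring TTFT and output speed over a streaming
# OpenAI-compatible API. Endpoint, key, and model name are placeholders;
# chunk count is used as an approximation of output tokens.
import time
from openai import OpenAI  # pip install openai

client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_KEY")

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="qwen2-72b-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Write a 300-word summary of HTTP/2."}],
    max_tokens=500,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        chunks += 1
end = time.perf_counter()

ttft = first_token_at - start
output_speed = chunks / (end - first_token_at)  # ~tokens per second after first chunk
print(f"TTFT: {ttft:.2f}s, output speed: {output_speed:.1f} tokens/s (approx.)")
```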
Output Speed by Input Token Count (Context Length)
Output Tokens per Second; Higher is better
100 input tokens
1k input tokens
10k input tokens
100k input tokens
Output Speed: Tokens per second received while the model is generating tokens (i.e. after first chunk has been received from the API for models which support streaming).
Input Tokens Length: Length of tokens provided in the request. See Prompt Options above to see benchmarks of different input prompt lengths across other charts.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Output Speed Variance
Output Tokens per Second; Results by percentile; Higher is better
Median, Other points represent 5th, 25th, 75th, 95th Percentiles respectively
Output Speed: Tokens per second received while the model is generating tokens (i.e. after first chunk has been received from the API for models which support streaming).
Boxplot: Shows variance of measurements

Output Speed, Over Time
Output Tokens per Second; Higher is better
Output Speed: Tokens per second received while the model is generating tokens (i.e. after first chunk has been received from the API for models which support streaming).
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent start of week's measurements.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
Seconds to First Answer Token Received; Accounts for Reasoning Model 'Thinking' time
Input processing
Thinking (reasoning models, when applicable)
Time To First Answer Token: Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.
Latency: Time To First Token
Seconds to First Token Received; Lower is better
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Time to First Token by Input Token Count (Context Length)
Seconds to First Token Received; Lower is better
100 input tokens
1k input tokens
10k input tokens
100k input tokens
Input Tokens Length: Length of tokens provided in the request. See Prompt Options above to see benchmarks of different input prompt lengths across other charts.
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Time to First Token Variance
Seconds to First Token Received; Results by percentile; Lower is better
Median, Other points represent 5th, 25th, 75th, 95th Percentiles respectively
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Boxplot: Shows variance of measurements

Time to First Token, Over Time
Seconds to First Token Received; Lower median is better
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent start of week's measurements.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
End-to-End Response Time
Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time
Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
Input processing time
'Thinking' time (reasoning models)
Outputting time
End-to-End Response Time: Seconds to receive a 500 token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. Amount of tokens based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
End-to-End Response Time by Input Token Count (Context Length)
Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
100 input tokens
1k input tokens
10k input tokens
100k input tokens
Input Tokens Length: Length of tokens provided in the request. See Prompt Options above to see benchmarks of different input prompt lengths across other charts.
End-to-End Response Time: Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
End-to-End Response Time, Over Time
Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
End-to-End Response Time: Seconds to receive a 500 token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. Amount of tokens based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent start of week's measurements.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Comparisons to Qwen2 72B
GPT-4o (Nov '24)
GPT-4.1
o4-mini (high)
o3-mini (high)
GPT-4.1 mini
Llama 4 Scout
Llama 4 Maverick
Gemini 2.0 Flash (Feb '25)
Gemma 3 27B Instruct
Gemini 2.5 Flash Preview
Gemini 2.5 Pro Preview (Mar '25)
Claude 3.7 Sonnet (Extended Thinking)
Mistral Large 2 (Nov '24)
DeepSeek R1
DeepSeek V3 0324 (Mar '25)
Grok 3 mini Reasoning (high)
Grok 3
Nova Pro
Llama 3.1 Nemotron Ultra 253B v1 (Reasoning)
Further details