
Aya Expanse 32B: Intelligence, Performance & Price Analysis
Analysis of Cohere's Aya Expanse 32B and comparison to other AI models across key metrics including quality, price, performance (tokens per second & time to first token), context window & more. Click on any model to compare API providers for that model. For further details, including our methodology, see our FAQs.
Comparison Summary
Intelligence: Aya Expanse 32B is of lower quality than average, with an MMLU score of 0.377 and an Intelligence Index across evaluations of 20.
Price: Aya Expanse 32B is cheaper than average, with a blended price of $0.75 per 1M tokens (3:1 input:output ratio; see the sketch after this summary). Input token price: $0.50 per 1M tokens; output token price: $1.50 per 1M tokens.
Speed: Aya Expanse 32B is faster than average, with an output speed of 119.6 tokens per second.
Latency: Aya Expanse 32B has lower latency than average, taking 0.15s to receive the first token (TTFT).
Context Window: Aya Expanse 32B has a smaller context window than average, at 130k tokens.
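As a quick check on the blended figure above, here is a minimal sketch of the 3:1 blend, assuming it is a simple weighted average of the per-1M-token input and output prices (our reading of the stated ratio):

```python
# Minimal sketch: blended price as a 3:1 weighted average of input and
# output token prices (per 1M tokens). The weighting is our reading of
# the stated "blended 3:1" ratio.
def blended_price(input_price: float, output_price: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    total_weight = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total_weight

# Aya Expanse 32B's listed prices:
print(blended_price(0.50, 1.50))  # 0.75 USD per 1M tokens, matching the summary
```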
Highlights
[Highlights chart — Intelligence: Artificial Analysis Intelligence Index (higher is better); Speed: output tokens per second (higher is better); Price: USD per 1M tokens (lower is better)]
Intelligence
Artificial Analysis Intelligence Index
Intelligence Index incorporates 7 evaluations spanning reasoning, knowledge, math & coding. Some values are estimates (independent evaluation forthcoming).
Artificial Analysis Intelligence Index: Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 2 was released in Feb '25 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
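For illustration only, a minimal sketch of how such a combination metric can be computed, assuming an equal-weighted average of the seven evaluation scores; the index's actual weighting and normalisation are defined in the Intelligence Index methodology:

```python
# Illustrative sketch of a combination metric over the seven evaluations.
# ASSUMPTION: equal weighting and scores already normalised to a 0-100 scale;
# the real Intelligence Index methodology may weight and normalise differently.
EVALUATIONS = ["MMLU-Pro", "GPQA Diamond", "Humanity's Last Exam",
               "LiveCodeBench", "SciCode", "AIME", "MATH-500"]

def intelligence_index(scores: dict[str, float]) -> float:
    """Equal-weighted average of per-evaluation scores (0-100)."""
    missing = set(EVALUATIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing evaluation scores: {sorted(missing)}")
    return sum(scores[name] for name in EVALUATIONS) / len(EVALUATIONS)
```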
Artificial Analysis Intelligence Index by Model Type
Models are grouped by type: Reasoning vs. Non-Reasoning.
Artificial Analysis Intelligence Index by Open Weights vs Proprietary
Models are grouped by weights availability: Proprietary, Open Weights, or Open Weights (Commercial Use Restricted).
Open Weights: Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).
Artificial Analysis Coding Index
Artificial Analysis Coding Index: Represents the average of coding evaluations in the Artificial Analysis Intelligence Index. Currently includes: LiveCodeBench, SciCode. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Artificial Analysis Math Index
Artificial Analysis Math Index: Represents the average of math evaluations in the Artificial Analysis Intelligence Index. Currently includes: AIME, MATH-500. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Intelligence Evaluations
Intelligence evaluations measured independently by Artificial Analysis; Higher is better
Some results are as claimed by the AI lab and not yet independently verified.
MMLU-Pro (Reasoning & Knowledge)
GPQA Diamond (Scientific Reasoning)
Humanity's Last Exam (Reasoning & Knowledge)
LiveCodeBench (Coding)
SciCode (Coding)
HumanEval (Coding)
MATH-500 (Quantitative Reasoning)
AIME 2024 (Competition Math)
Multilingual Index (Artificial Analysis)
Intelligence vs. Price
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Price: USD per 1M Tokens
Most attractive quadrant highlighted. Models shown: o1, GPT-4o (Nov '24), GPT-4o mini, o3-mini (high), Llama 3.3 70B, Llama 3.1 8B, Gemini 2.0 Flash, Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), Mistral Small 3.1, DeepSeek R1, DeepSeek V3 (Mar '25), Nova Pro, Nova Micro, Command A, Aya Expanse 32B, QwQ-32B, DeepSeek V3.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Intelligence vs. Output Speed
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Output Speed: Output Tokens per Second
Most attractive quadrant highlighted. Models shown: o1, GPT-4o (Nov '24), GPT-4o mini, Llama 3.3 70B, Llama 3.1 8B, Gemini 2.0 Flash, Gemma 3 27B, Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), Mistral Small 3.1, DeepSeek R1, DeepSeek V3 (Mar '25), Nova Pro, Nova Micro, Command A, Aya Expanse 32B, QwQ-32B, DeepSeek V3.
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Intelligence vs. Total Response Time
Artificial Analysis Intelligence Index (Version 2, released Feb '25); End-to-End Seconds to Output 100 Tokens; Lower is better
Most attractive quadrant highlighted. Models shown: o1, GPT-4o (Nov '24), GPT-4o mini, Llama 3.3 70B, Llama 3.1 8B, Gemini 2.0 Flash, Gemma 3 27B, Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), Mistral Small 3.1, DeepSeek R1, DeepSeek V3 (Mar '25), Nova Pro, Nova Micro, Command A, Aya Expanse 32B, QwQ-32B, DeepSeek V3.
Total Response Time: Time to receive a 100 token response. Calculated based on Latency (time to receive first token) and Output Speed (output tokens per second).
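The relationship between the three metrics is mechanical; a minimal sketch, assuming total time is simply time to first token plus the output tokens divided by the steady-state output speed (the page's exact accounting may differ slightly):

```python
# Sketch: reconstruct total response time from latency and output speed.
# ASSUMPTION: total time = TTFT + output tokens / steady-state output rate.
def total_response_time(ttft_s: float, tokens_per_s: float,
                        output_tokens: int = 100) -> float:
    return ttft_s + output_tokens / tokens_per_s

# Using Aya Expanse 32B's summary figures (0.15s TTFT, 119.6 tokens/s):
print(total_response_time(0.15, 119.6))  # ~0.99 seconds to output 100 tokens
```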
Context Window
Context Window: Tokens Limit; Higher is better
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
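Because the limit covers input and output combined, the output budget shrinks as the prompt grows. A minimal sketch, using the 130k context window from the summary above; the separate max-output cap is a hypothetical parameter, since those limits vary by model:

```python
# Sketch: output-token budget under a combined input+output context limit.
# The max_output_tokens cap is a hypothetical, model-specific parameter.
def output_budget(context_window: int, input_tokens: int,
                  max_output_tokens: int | None = None) -> int:
    remaining = context_window - input_tokens
    if max_output_tokens is not None:
        remaining = min(remaining, max_output_tokens)
    return max(remaining, 0)

print(output_budget(130_000, 10_000))         # 120000 tokens available
print(output_budget(130_000, 10_000, 4_096))  # 4096 under a lower output cap
```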
Intelligence vs. Context Window
Artificial Analysis Intelligence Index (Version 2, released Feb '25); Context Window: Tokens Limit
Most attractive quadrant highlighted. Models shown: o1, GPT-4o (Nov '24), GPT-4o mini, o3-mini (high), Llama 3.3 70B, Llama 3.1 8B, Gemini 2.0 Flash, Gemma 3 27B, Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), Mistral Small 3.1, DeepSeek R1, DeepSeek V3 (Mar '25), Nova Pro, Nova Micro, Command A, Aya Expanse 32B, QwQ-32B, DeepSeek V3.
Pricing
Pricing: Input and Output Prices
Price: USD per 1M Tokens
Series: input price and output price.
Input Price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Pricing Comparison of Aya Expanse 32B API Providers
Performance Summary
Output Speed vs. Price
Output Speed: Output Tokens per Second; Price: USD per 1M Tokens
Most attractive quadrant highlighted. Models shown: o1, GPT-4o (Nov '24), GPT-4o mini, Llama 3.3 70B, Llama 3.1 8B, Gemini 2.0 Flash, Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), Mistral Small 3.1, DeepSeek R1, DeepSeek V3 (Mar '25), Nova Pro, Nova Micro, Command A, Aya Expanse 32B, QwQ-32B, DeepSeek V3.
Latency vs. Output Speed
Latency: Seconds to First Token Received; Output Speed: Output Tokens per Second
Most attractive quadrant highlighted. Models shown: o1, GPT-4o (Nov '24), GPT-4o mini, Llama 3.3 70B, Llama 3.1 8B, Gemini 2.0 Flash, Gemma 3 27B, Gemini 2.5 Pro Experimental, Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, Mistral Large 2 (Nov '24), Mistral Small 3.1, DeepSeek R1, DeepSeek V3 (Mar '25), Nova Pro, Nova Micro, Command A, Aya Expanse 32B, QwQ-32B, DeepSeek V3.
Latency: Time to first token received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Tokens per Second; Higher is better
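Both output speed and latency fall out of timestamping a streaming response. A minimal sketch of the measurement, where `chunks` is a hypothetical stand-in for any streaming API client yielding text chunks with token counts (it does not model a specific provider SDK):

```python
import time
from typing import Iterable, Tuple

# Sketch: measure time-to-first-token and output speed from a streaming
# response. `chunks` is a hypothetical iterator of (text, token_count)
# pairs from a streaming API client.
def measure_stream(chunks: Iterable[Tuple[str, int]]) -> Tuple[float, float]:
    request_sent = time.perf_counter()
    first_chunk_at = None
    total_tokens = 0
    for _text, n_tokens in chunks:
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter()  # latency (TTFT)
        total_tokens += n_tokens
    finished_at = time.perf_counter()
    if first_chunk_at is None:
        raise RuntimeError("stream produced no chunks")
    ttft = first_chunk_at - request_sent
    # Output speed counts only the generation phase, i.e. after the first chunk.
    generating = finished_at - first_chunk_at
    speed = total_tokens / generating if generating > 0 else float("inf")
    return ttft, speed
```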
Output Speed by Input Token Count (Context Length)
Output Tokens per Second; Higher is better
Series: 100, 1k, 10k, and 100k input tokens.
Input Tokens Length: Number of tokens provided in the request.
Output Speed Variance
Output Tokens per Second; Results by percentile; Higher is better
Median shown; other points represent the 5th, 25th, 75th, and 95th percentiles respectively.
Boxplot: Shows variance of measurements
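A minimal sketch of how these percentile points can be computed from repeated measurements; the sample speeds below are made-up values for illustration:

```python
import numpy as np

# Sketch: summarise repeated output-speed measurements into the percentile
# points plotted on the variance charts. Sample values are illustrative only.
samples = np.array([112.4, 118.9, 121.3, 119.6, 124.0, 109.8, 120.2, 117.5])
p5, p25, median, p75, p95 = np.percentile(samples, [5, 25, 50, 75, 95])
print(f"median={median:.1f} tok/s, "
      f"IQR=({p25:.1f}, {p75:.1f}), tails=({p5:.1f}, {p95:.1f})")
```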

Output Speed, Over Time
Output Tokens per Second; Higher is better
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.
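A minimal sketch of the daily-median reduction described above, assuming timestamped speed measurements (the timestamps and values below are illustrative):

```python
import pandas as pd

# Sketch: reduce repeated measurements (8 per day at varying times) to a
# median per day, as in the over-time chart. Data below is illustrative only.
measurements = pd.DataFrame({
    "ts": pd.date_range("2025-03-01", periods=16, freq="3h"),  # 8/day, 2 days
    "tokens_per_s": [118, 121, 117, 122, 119, 120, 116, 123,
                     120, 119, 121, 118, 124, 117, 122, 120],
})
daily_median = measurements.set_index("ts")["tokens_per_s"].resample("D").median()
print(daily_median)
```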
Latency
Measured by Time (seconds) to First Token
Seconds to First Token Received; Lower is better
Latency by Input Token Count (Context Length)
Seconds to First Token Received; Lower is better
Series: 100, 1k, 10k, and 100k input tokens.
Latency Variance
Seconds to First Token Received; Results by percentile; Lower is better

Total Response Time
Time to receive 100 output tokens, calculated from the latency and output speed metrics
End-to-End Seconds to Output 100 Tokens; Lower is better
Total Response Time by Input Token Count (Context Length)
End-to-End Seconds to Output 100 Tokens; Lower is better
Series: 100, 1k, 10k, and 100k input tokens.
Total Response Time Variance
Total Response Time: End-to-End Seconds to Output 100 Tokens; Results by percentile; Lower is better

Comparisons to Aya Expanse 32B
o1
GPT-4o (Nov '24)
GPT-4o mini
o3-mini (high)
o1-pro
Llama 3.3 Instruct 70B
Llama 3.1 Instruct 8B
Gemini 2.0 Flash (Feb '25)
Gemma 3 27B Instruct
Gemini 2.5 Pro Experimental (Mar '25)
Claude 3.7 Sonnet (Standard)
Claude 3.7 Sonnet (Extended Thinking)
Mistral Large 2 (Nov '24)
Mistral Small 3.1
DeepSeek R1
DeepSeek V3 0324 (Mar '25)
Grok 3 Reasoning Beta
Grok 3
Nova Pro
Nova Micro
Command A
QwQ 32B
DeepSeek V3
Further details