o4-mini (high): Intelligence, Performance & Price Analysis
Analysis of OpenAI's o4-mini (high) compared to other AI models across key metrics, including quality, price, performance (tokens per second and time to first token), and context window. For more details, including our methodology, see our FAQs.
Comparison Summary
Intelligence: o4-mini (high) is of higher quality than average, with an MMLU score of 0.832 and an Intelligence Index of 70 across evaluations.
Price: o4-mini (high) is cheaper than average, with a price of $1.93 per 1M tokens (blended 3:1).
o4-mini (high) input token price: $1.10, output token price: $4.40 per 1M tokens.
Speed: o4-mini (high) is faster than average, with an output speed of 134.8 tokens per second.
Latency: o4-mini (high) has higher latency than average, taking 35.31s to receive the first token (TTFT).
Context Window: o4-mini (high) has a smaller context window than average, at 200k tokens.
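The headline figures above can be reproduced from the per-token numbers. As a minimal sketch (assuming "blended 3:1" means input tokens weighted 3x output tokens, and that end-to-end response time is roughly TTFT plus generation time at the measured output speed; the 1000-token response length is a hypothetical example):

```python
# Per-1M-token prices from the summary above
input_price = 1.10   # USD per 1M input tokens
output_price = 4.40  # USD per 1M output tokens

# Blended 3:1 price: weight input tokens 3x output tokens
blended = (3 * input_price + 1 * output_price) / 4
print(f"Blended price (3:1): ${blended:.2f} per 1M tokens")  # → $1.93

# Rough end-to-end time for a response: time to first token
# plus generation time at the measured output speed
ttft = 35.31      # seconds to first token
speed = 134.8     # output tokens per second
n_tokens = 1000   # hypothetical response length

total = ttft + n_tokens / speed
print(f"~{total:.1f}s for a {n_tokens}-token response")  # → ~42.7s
```

Note how the high TTFT dominates: even a fast output speed of 134.8 tokens/s adds only about 7 seconds of generation time to the 35-second wait for the first token.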
Highlights
Intelligence
Artificial Analysis Intelligence Index; Higher is better
Speed
Output Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better
o4-mini (high) Model Details
Comparisons to o4-mini (high)
GPT-4.1
o3-mini (high)
GPT-4.1 mini
GPT-4o (March 2025, chatgpt-4o-latest)
Llama 4 Scout
Llama 4 Maverick
Gemini 2.0 Flash (Feb '25)
Gemma 3 27B Instruct
Gemini 2.5 Flash Preview
Gemini 2.5 Pro Preview (Mar '25)
Claude 3.7 Sonnet (Extended Thinking)
Mistral Large 2 (Nov '24)
DeepSeek R1
DeepSeek V3 0324 (Mar '25)
Grok 3 mini Reasoning (high)
Grok 3
Nova Pro
Llama 3.1 Nemotron Ultra 253B v1 (Reasoning)
Further details