OpenAI has launched a newer model, GPT-5 mini (high); we suggest considering this model instead.
For more information, see Comparison of GPT-5 mini (high) to other models and API provider benchmarks for GPT-5 mini (high).
o4-mini (high) API Provider Benchmarking & Analysis
Analysis of API providers for o4-mini (high) across performance metrics including latency (time to first token), output speed (output tokens per second), price, and other metrics. API providers benchmarked include Microsoft Azure and OpenAI.
- Fastest (output speed, output tokens per second): Azure, 155.2 tokens/s (2 providers compared)
- Lowest latency (time to first token): Azure, 20.73 s (2 providers compared)
- Lowest price (blended price per 1M tokens): Azure and OpenAI, $1.93 (2 providers compared)
o4-mini (high) is available through 2 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.
- For output speed, Azure leads at 155.2 tokens/s, followed by OpenAI at 150.2 tokens/s.
- For latency, Azure (20.73 s) and OpenAI (24.66 s) offer the lowest time to first token.
- For pricing, Azure ($1.93) and OpenAI ($1.93) are tied for the lowest blended price per 1M tokens (the blended-price calculation is sketched after this list).
- Azure stands out as the overall leader, ranking first across all three categories: speed, latency, and pricing.
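The blended figure folds input and output token prices into a single per-million-token number. Below is a minimal sketch of that calculation, assuming the common 3:1 input-to-output weighting and o4-mini's published list prices of $1.10 / $4.40 per 1M input/output tokens; both the weighting and the prices are assumptions to check against the providers' pricing pages, not values stated on this page.

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Blended price per 1M tokens, weighting input and output token prices.

    The 3:1 input-to-output weighting is an assumption about the benchmark's
    methodology, not a value stated on this page.
    """
    total = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total

# Assumed o4-mini list prices (USD per 1M tokens); verify against the provider's pricing page.
print(round(blended_price(1.10, 4.40), 2))  # -> 1.93
```

At a 3:1 weighting the result matches the $1.93 quoted above, which suggests this is how the blended price is derived.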
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above!
Pricing
Input and Output Prices: o4-mini (high) Providers
Speed vs. Price: o4-mini (high) Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed: o4-mini (high) Providers
Latency vs. Output Speed: o4-mini (high) Providers
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token: o4-mini (high) Providers
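Both of these metrics can be reproduced client-side against any OpenAI-compatible streaming endpoint. Below is a minimal sketch, assuming the openai Python package with credentials and base URL supplied via the environment; the model name and prompt are placeholders, and streamed chunks are only an approximation of token counts.

```python
import time
from openai import OpenAI

client = OpenAI()  # API key and base URL are read from the environment (assumption)

def measure_stream(model: str = "o4-mini", prompt: str = "Explain TCP slow start."):
    """Rough client-side measurement of time to first token and output speed."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0

    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()
            chunks += 1  # each chunk is roughly one token; exact counts need a tokenizer

    end = time.perf_counter()
    ttft = first_token_at - start if first_token_at else float("nan")
    speed = chunks / (end - first_token_at) if first_token_at and end > first_token_at else float("nan")
    return ttft, speed

if __name__ == "__main__":
    ttft, speed = measure_stream()
    print(f"time to first token: {ttft:.2f} s, output speed: ~{speed:.1f} chunks/s")
```

For a reasoning model like o4-mini (high), the first streamed answer token arrives only after the model's internal "thinking" phase, which is why the time-to-first-token figures above run into the tens of seconds.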
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed (a worked example follows below).
End-to-End Response Time: o4-mini (high) Providers
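That calculation can be written down directly. Here is a minimal sketch, assuming the reasoning ("thinking") time is already included in the time-to-first-answer-token measurement (an assumption about the methodology described above):

```python
def end_to_end_seconds(ttft_s: float, output_speed_tps: float, output_tokens: int = 500) -> float:
    """Estimated end-to-end response time: time to first (answer) token, assumed here
    to absorb the reasoning time, plus the time to stream the remaining output tokens
    at the measured output speed."""
    return ttft_s + output_tokens / output_speed_tps

# Using the Azure figures quoted above (20.73 s to first token, 155.2 tokens/s):
print(round(end_to_end_seconds(20.73, 155.2), 1))  # -> 24.0 seconds for 500 output tokens
```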
API Features
Function (Tool) Calling & JSON Mode: o4-mini (high) Providers
Context Window: o4-mini (high) Providers
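To make the two feature columns concrete, here is a minimal sketch of a request that exercises both function (tool) calling and JSON mode via the OpenAI Python client. The tool definition is hypothetical, the model/deployment name may differ on Azure, and the reasoning_effort parameter (used here to request the "high" variant) should be checked against each provider's current API reference.

```python
import json
from openai import OpenAI

client = OpenAI()

# A hypothetical tool definition; the schema format follows the Chat Completions API.
tools = [{
    "type": "function",
    "function": {
        "name": "get_provider_price",
        "description": "Look up the blended price for a model/provider pair.",
        "parameters": {
            "type": "object",
            "properties": {
                "provider": {"type": "string"},
                "model": {"type": "string"},
            },
            "required": ["provider", "model"],
        },
    },
}]

response = client.chat.completions.create(
    model="o4-mini",                  # deployment name may differ on Azure
    reasoning_effort="high",          # requests the "(high)" variant benchmarked here
    messages=[
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "What is the blended price of o4-mini on Azure?"},
    ],
    tools=tools,
    response_format={"type": "json_object"},  # JSON mode; not supported by every provider
)

choice = response.choices[0].message
if choice.tool_calls:
    print("tool call:", choice.tool_calls[0].function.name,
          json.loads(choice.tool_calls[0].function.arguments))
else:
    print("json reply:", choice.content)
```

Per the summary below, both providers support tool calling, but only OpenAI exposes JSON mode for this model, so the response_format line may need to be dropped when pointing at Azure.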
Summary Table of Key Comparison Metrics
Frequently Asked Questions
Common questions about o4-mini (high) providers
How many API providers offer o4-mini (high)?
o4-mini (high) is currently available through 2 API providers that we benchmark and track.
Which providers support JSON mode for o4-mini (high)?
1 of the 2 providers supports JSON mode for o4-mini (high): OpenAI.
Which providers support function calling for o4-mini (high)?
All 2 providers of o4-mini (high) support function calling (tool use).
Which provider performs best for o4-mini (high)?
Azure leads across all key metrics for o4-mini (high), offering the fastest output speed, the lowest latency, and the most competitive pricing.
What should I consider when choosing a provider for o4-mini (high)?
When choosing a provider for o4-mini (high), consider output speed (for throughput-intensive tasks), latency (for interactive applications that need a quick first response), pricing (for cost-sensitive workloads), and API features such as JSON mode and function calling.
Does provider performance change over time?
Yes. Provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
For information about o4-mini (high)'s intelligence, capabilities, modalities, and how it compares to other models, see the model overview page.