OpenAI has launched a newer model, GPT-5 mini (medium); we suggest considering this model instead.
For more information, see Comparison of GPT-5 mini (medium) to other models and API provider benchmarks for GPT-5 mini (medium).
o3-mini API Provider Benchmarking & Analysis
Analysis of API providers for o3-mini across performance metrics including latency (time to first token), output speed (output tokens per second), price, and more. API providers benchmarked include Microsoft Azure and OpenAI.
o3-mini is available through 2 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.
- For output speed, Azure (152.6 t/s) leads, followed by OpenAI (132.1 t/s).
- For latency, Azure (16.11s) offers the lowest time to first token, ahead of OpenAI (18.55s).
- For pricing, Azure and OpenAI are tied at a blended price of $1.93 per 1M tokens.
- Azure stands out as the overall leader, ranking first in both speed and latency while matching OpenAI on price.
Pricing
Pricing: Input and Output Prices: o3-mini Providers
- Input price: price per token included in the request/message sent to the API, in USD per million tokens.
- Output price: price per token generated by the model (received from the API), in USD per million tokens.
Speed vs. Price: o3-mini Providers
- Output speed: tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
- Blended price: price per token, in USD per million tokens, blending input and output token prices in a 3:1 ratio (see the worked example below).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
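To make the blended metric concrete, here is a minimal sketch of the 3:1 blend described above. The per-token rates used ($1.10 input, $4.40 output per 1M tokens) are o3-mini's published list prices at the time of writing and are included for illustration only; verify against each provider's current pricing.

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """Blend input and output prices per 1M tokens at a 3:1 input:output ratio."""
    return (3 * input_usd_per_m + output_usd_per_m) / 4

# o3-mini list prices (USD per 1M tokens), used for illustration; check current rates.
print(blended_price(1.10, 4.40))  # 1.925 -> rounds to the $1.93 shown above
```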
Speed
Measured by Output Speed (tokens per second)
Output Speed: o3-mini Providers
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Latency vs. Output Speed: o3-mini Providers
- Output speed: tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
- Latency: time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this is the first reasoning token. For models which do not support streaming, this is the time to receive the completion.
- Blended price: price per token, in USD per million tokens, blending input and output token prices in a 3:1 ratio.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
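To reproduce a rough version of these two measurements against your own endpoint, here is a minimal sketch using the OpenAI Python SDK's streaming interface. The model name and prompt are placeholders, stream chunks are counted as a proxy for tokens, and this is a single run rather than the 72-hour median used above.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
stream = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "Explain TCP slow start briefly."}],
    stream=True,
)

first_token_at = None
chunks = 0
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            # o3-mini reasons server-side before emitting content, so this is
            # effectively time to first *answer* token.
            first_token_at = time.perf_counter()
        chunks += 1
end = time.perf_counter()

print(f"TTFT: {first_token_at - start:.2f}s, "
      f"output speed ~{chunks / (end - first_token_at):.1f} chunks/s")
```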
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token: o3-mini Providers
Time to first answer token received, in seconds, after the API request is sent. For reasoning models, this includes the model's 'thinking' time before it provides an answer. For models which do not support streaming, this represents the time to receive the completion.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed.
End-to-End Response Time: o3-mini Providers
Seconds to receive a 500 token response. Key components:
- Input time: time to receive the first response token.
- Thinking time (reasoning models only): time the model spends outputting reasoning tokens before providing an answer. The token count is based on the average number of reasoning tokens across a diverse set of 60 prompts (see methodology details).
- Answer time: time to generate 500 output tokens, based on output speed.
For fair comparison, the number of reasoning tokens is standardized across all providers for each model, based on the model's representative query token counts; the sketch below shows how the components combine.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
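A minimal sketch of how these components might combine additively, assuming reasoning and answer tokens are both produced at the measured output speed. The input values are placeholders; in particular, the 2,000-token reasoning count is hypothetical, not the standardized figure used in the benchmark.

```python
def e2e_response_time(ttft_s: float, reasoning_tokens: int,
                      answer_tokens: int, output_speed_tps: float) -> float:
    """End-to-end seconds: input time + thinking time + answer time."""
    thinking_s = reasoning_tokens / output_speed_tps
    answer_s = answer_tokens / output_speed_tps
    return ttft_s + thinking_s + answer_s

# Placeholder inputs: 0.8s to first token, a hypothetical 2,000 reasoning
# tokens, the standard 500 answer tokens, and Azure's measured 152.6 t/s.
print(f"{e2e_response_time(0.8, 2000, 500, 152.6):.1f}s")  # ~17.2s
```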
API Features
Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
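As a quick illustration of function calling, here is a minimal sketch using the OpenAI Python SDK's tools parameter; the get_weather function and its schema are hypothetical.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool schema; the model decides whether to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # arguments arrive as a JSON string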
Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the models will always return a valid JSON object.
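And a minimal JSON-mode sketch: with the OpenAI-style API this is enabled via response_format, and note that the prompt itself must mention JSON for the request to be accepted.

```python
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user",
               "content": "Return a JSON object with keys 'city' and 'population' for Oslo."}],
    response_format={"type": "json_object"},  # output is guaranteed valid JSON
)
print(resp.choices[0].message.content)
```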
Context Window: o3-mini Providers
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model). While models have their own context window, in some cases this is further limited by providers.
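A minimal sketch of a pre-flight check against a provider's context window, assuming tiktoken's o200k_base encoding approximates o3-mini's tokenizer. The 200,000-token default shown here is an assumption; use the limit your provider actually advertises.

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # assumed to approximate o3-mini's tokenizer

def fits_context(prompt: str, max_output_tokens: int,
                 context_window: int = 200_000) -> bool:
    """Rough pre-flight check: input tokens + requested output must fit the window."""
    return len(enc.encode(prompt)) + max_output_tokens <= context_window

print(fits_context("Summarize this document...", max_output_tokens=4_096))
```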
Summary Table of Key Comparison Metrics
FAQ
Common questions about o3-mini providers
How many API providers offer o3-mini?
o3-mini is currently available through 2 API providers that we benchmark and track.
Do o3-mini providers support JSON mode?
Both providers of o3-mini support JSON mode for structured output.
Do o3-mini providers support function calling?
Both providers of o3-mini support function calling (tool use).
Which provider offers the best overall performance for o3-mini?
Azure leads on speed and latency for o3-mini, offering the fastest output speed and the lowest time to first token, while matching OpenAI on blended price.
What should I consider when choosing a provider for o3-mini?
When choosing a provider for o3-mini, consider: output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and API features such as JSON mode and function calling; the sketch below shows one way to weigh these trade-offs.
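As a rough way to compare on these criteria, here is a sketch that turns the figures from this page into workload-level estimates. Serial execution and a single blended token count per request are simplifying assumptions, and the measured figures are snapshots, not guarantees.

```python
# Figures from this page; treat them as illustrative snapshots.
providers = {
    "Azure":  {"speed_tps": 152.6, "ttft_s": 16.11, "blended_usd_per_m": 1.93},
    "OpenAI": {"speed_tps": 132.1, "ttft_s": 18.55, "blended_usd_per_m": 1.93},
}

def estimate(p: dict, requests: int = 1_000, tokens_per_request: int = 1_000):
    """Rough serial wall-clock time and cost for a batch workload."""
    seconds = requests * (p["ttft_s"] + tokens_per_request / p["speed_tps"])
    cost = requests * tokens_per_request / 1e6 * p["blended_usd_per_m"]
    return seconds, cost

for name, p in providers.items():
    s, c = estimate(p)
    print(f"{name}: ~{s / 3600:.1f} h serial, ~${c:.2f}")
```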
Does provider performance change over time?
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
For information about o3-mini's intelligence, capabilities, modalities, and how it compares to other models, see the model overview page.