OpenAI has launched a newer model, o3-pro; we suggest considering that model instead.
For more information, see Comparison of o3-pro to other models and API provider benchmarks for o3-pro.
o1-pro API Provider Benchmarking & Analysis
Analysis of API providers for o1-pro across performance metrics including latency (time to first token), output speed (output tokens per second), price, and others.
Fastest (output speed): 0 providers
Lowest Latency (time to first token): 0 providers
Lowest Price (blended price per 1M tokens): 0 providers
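The "blended price" metric above combines a provider's input and output per-token rates into a single figure. As a sketch, assuming a weighted average with a hypothetical 3:1 input-to-output token mix (the exact weighting used by the benchmark is not stated here), it could be computed as:

```python
def blended_price(input_price_per_1m: float, output_price_per_1m: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    """Weighted average price per 1M tokens for an assumed input:output token mix."""
    total_weight = input_ratio + output_ratio
    return (input_price_per_1m * input_ratio
            + output_price_per_1m * output_ratio) / total_weight

# Hypothetical rates in $ per 1M tokens (for illustration only):
print(blended_price(150.0, 600.0))  # 3:1 mix -> 262.5
```

With a 3:1 mix, the blended price sits three-quarters of the way toward the input rate, which is why providers with cheap input tokens rank well on this metric even when output tokens are expensive.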
No API providers are currently available for o1-pro.
Benchmarks of providers are not available for this model.
Please see the models page for o1-pro for details of the model and its intelligence compared to other models.
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above.
Pricing
Input and Output Prices: o1-pro Providers
Speed vs. Price: o1-pro Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed: o1-pro Providers
Latency vs. Output Speed: o1-pro Providers
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token: o1-pro Providers
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time: o1-pro Providers
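The end-to-end response time described above is additive: time to first token, plus any reasoning ('thinking') time, plus the generation time for the requested output tokens. A minimal sketch, with all input values hypothetical:

```python
def end_to_end_seconds(ttft_s: float, thinking_s: float,
                       output_tokens: int, tokens_per_second: float) -> float:
    """End-to-end response time: time to first token + 'thinking' time
    for reasoning models + time to generate the output tokens."""
    return ttft_s + thinking_s + output_tokens / tokens_per_second

# Hypothetical provider: 2s to first token, 30s of reasoning,
# then 500 output tokens streamed at 50 tokens/second.
print(end_to_end_seconds(2.0, 30.0, 500, 50.0))  # -> 42.0
```

For reasoning models like o1-pro, the 'thinking' term often dominates, so a provider with a high output speed can still have a long end-to-end response time.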
API Features
Function (Tool) Calling & JSON Mode: o1-pro Providers
Context Window: o1-pro Providers
Summary Table of Key Comparison Metrics
Frequently Asked Questions
Common questions about o1-pro providers
o1-pro is not currently available through any API providers we benchmark. Check back later for availability updates.