gpt-oss-120B (high) API Provider Benchmarking & Analysis
Analysis of API providers for gpt-oss-120B (high) across performance metrics including latency (time to first token), output speed (output tokens per second), price, and others. API providers benchmarked include Cerebras, Parasail, Databricks, Cloudflare, Nebius Base, Microsoft Azure, Baseten, SambaNova, Hyperbolic, Amazon Bedrock, Together.ai, Lightning AI, DeepInfra (Turbo), Google Vertex, Snowflake, Groq, Clarifai, DeepInfra, Novita, Fireworks, Scaleway, and Eigen AI.
gpt-oss-120B (high) is available through 22 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.
- For output speed, the top providers are Cerebras (3,034.2 t/s), Together.ai (912.9 t/s), and Fireworks (781.6 t/s). Speed varies significantly across providers, with a 326% difference between the fastest and slowest (the sketch after this list shows how the spread is computed).
- For latency, Baseten (0.12s), Groq (0.14s), and Lightning AI (0.18s) offer the lowest time to first token.
- For pricing, DeepInfra ($0.08), Novita ($0.10), and Google Vertex ($0.16) offer the lowest blended prices per 1M tokens. Prices vary by up to 2.3x across providers.
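As a point of reference for the spread figures above, here is a minimal Python sketch of the percent-difference calculation. The slowest provider's speed is not listed above, so the value used below is a hypothetical back-calculation from the quoted 326% figure, not a measured number.

```python
def pct_difference(fastest: float, slowest: float) -> float:
    """Percent difference between the fastest and slowest providers,
    relative to the slowest: (fastest - slowest) / slowest * 100."""
    return (fastest - slowest) / slowest * 100.0

# Median output speed for the fastest provider, from the summary above.
cerebras_tps = 3034.2

# Hypothetical: the slowest provider's speed implied by a 326% difference.
slowest_tps = cerebras_tps / 4.26  # ~712 t/s, illustrative only

print(f"{pct_difference(cerebras_tps, slowest_tps):.0f}% difference")  # ~326%
```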
Pricing
Input and Output Prices: gpt-oss-120B (high) Providers
Input price: Price per token included in the request/message sent to the API, represented as USD per million tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million tokens.
Speed vs. Price: gpt-oss-120B (high) Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
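As a minimal sketch of the blended-price formula described above (3 parts input price to 1 part output price), with purely hypothetical prices:

```python
def blended_price(input_usd_per_mtok: float, output_usd_per_mtok: float) -> float:
    """Blend input and output prices at the 3:1 input:output token
    ratio used in the charts above."""
    return (3 * input_usd_per_mtok + output_usd_per_mtok) / 4

# Hypothetical prices in USD per million tokens, for illustration only.
print(blended_price(0.10, 0.40))  # -> 0.175
```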
Speed
Measured by Output Speed (tokens per second)
Output Speed: gpt-oss-120B (high) Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Latency vs. Output Speed: gpt-oss-120B (high) Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token: gpt-oss-120B (high) Providers
Time to first answer token received, in seconds, after the API request is sent. For reasoning models, this includes the model's 'thinking' time before providing an answer. For models which do not support streaming, this represents the time to receive the completion.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
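Both metrics can be reproduced against any provider exposing an OpenAI-compatible streaming endpoint. Below is a minimal sketch, assuming a placeholder base URL, API key, and model name; streamed chunks only approximate token counts, so a real harness would use a tokenizer.

```python
import time
from openai import OpenAI  # assumes an OpenAI-compatible provider endpoint

# Placeholder endpoint, key, and model name; substitute your provider's values.
client = OpenAI(base_url="https://provider.example.com/v1", api_key="sk-...")

start = time.perf_counter()
first_token_at = None
n_chunks = 0

stream = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "Briefly explain quicksort."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            # Time to first token: first streamed content after the request.
            first_token_at = time.perf_counter()
        n_chunks += 1  # chunks roughly approximate tokens

end = time.perf_counter()
print(f"TTFT: {first_token_at - start:.2f}s")
# Output speed counts only time after the first chunk, per the definition above.
print(f"Output speed: {n_chunks / (end - first_token_at):.1f} tok/s")
```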
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time: gpt-oss-120B (high) Providers
Seconds to receive a 500 token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (reasoning models only): Time the model spends outputting reasoning tokens before providing an answer. The token count is based on the average number of reasoning tokens across a diverse set of 60 prompts (see methodology details).
- Answer time: Time to generate the 500 output tokens, based on output speed
For fair comparison, the number of reasoning tokens is standardized across all providers for each model, based on the model's representative query token counts; the sketch below shows the decomposition.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
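Combining the three components, here is a minimal sketch of the end-to-end calculation; the reasoning-token count and speeds below are hypothetical, since the standardized per-model counts are not listed here.

```python
def e2e_seconds(ttft_s: float, reasoning_tokens: int,
                answer_tokens: int, output_tps: float) -> float:
    """End-to-end response time per the decomposition above: input time
    (time to first token) + thinking time + answer time, with both token
    phases generated at the measured output speed."""
    return ttft_s + (reasoning_tokens + answer_tokens) / output_tps

# Hypothetical: 0.3s TTFT, 1,500 reasoning tokens, 500-token answer, 200 t/s.
print(f"{e2e_seconds(0.3, 1_500, 500, 200.0):.1f}s")  # -> 10.3s
```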
API Features
Function (Tool) Calling & JSON Mode: gpt-oss-120B (high) Providers
Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the model will always return a valid JSON object.
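For context, this is how the two features are typically exercised against an OpenAI-compatible chat completions API. The endpoint, key, model name, and get_weather tool are placeholders, and exact support varies by provider, which is what the comparison above captures.

```python
from openai import OpenAI  # assumes an OpenAI-compatible provider endpoint

client = OpenAI(base_url="https://provider.example.com/v1", api_key="sk-...")

# JSON mode: the provider constrains the response to a valid JSON object.
resp = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "List three primes as a JSON object."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)

# Function (tool) calling: the model may return a structured tool call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
resp = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```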
Context Window: gpt-oss-120B (high) Providers
Maximum number of combined input and output tokens. Output tokens commonly have a significantly lower limit (this varies by model).
While each model has its own context window, in some cases providers limit it to a smaller value.
Endpoint Evaluations
GPQAx16 Performance: gpt-oss-120B (high)
Results on the GPQA Diamond benchmark, which tests scientific knowledge and reasoning. Each provider endpoint is evaluated in 16 independent runs.
Min (whisker bottom), 25th percentile (box lower edge), Median (line in box), 75th percentile (box upper edge), Max (whisker top).
AIME25x32 Performance: gpt-oss-120B (high)
Results on the AIME 2025 benchmark, which tests advanced mathematical problem-solving capabilities. Each provider endpoint is evaluated in 32 independent runs.
Min (whisker bottom), 25th percentile (box lower edge), Median (line in box), 75th percentile (box upper edge), Max (whisker top).
IFBenchx8 Performance: gpt-oss-120B (high)
Results on the IFBench benchmark, which evaluates instruction-following capabilities. Each provider endpoint is evaluated in 8 independent runs.
Min (whisker bottom), 25th percentile (box lower edge), Median (line in box), 75th percentile (box upper edge), Max (whisker top).
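The box plots above reduce each endpoint's repeated runs to a five-number summary. A minimal sketch of that reduction, using hypothetical accuracy scores:

```python
import statistics

def box_stats(scores: list[float]) -> dict[str, float]:
    """Five-number summary matching the box plots above: min, 25th
    percentile, median, 75th percentile, max over an endpoint's runs."""
    p25, median, p75 = statistics.quantiles(scores, n=4)
    return {"min": min(scores), "p25": p25, "median": median,
            "p75": p75, "max": max(scores)}

# Hypothetical GPQA accuracies (%) from 16 independent runs of one endpoint.
runs = [71.2, 72.8, 70.1, 73.4, 72.0, 71.9, 70.7, 73.0,
        72.5, 71.5, 70.9, 72.2, 73.1, 71.0, 72.7, 71.8]
print(box_stats(runs))
```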