
gpt-oss-120B (high) API Provider Benchmarking & Analysis

Open weights model

Released August 2025

Analysis of API providers for gpt-oss-120B (high) across performance metrics including latency (time to first token), output speed (output tokens per second), and price. API providers benchmarked include Cerebras, Parasail, Databricks, Cloudflare, Nebius Base, Microsoft Azure, Baseten, SambaNova, Hyperbolic, Amazon Bedrock, Together.ai, Lightning AI, DeepInfra (Turbo), Google Vertex, Snowflake, Groq, Clarifai, DeepInfra, Novita, Fireworks, Scaleway, and Eigen AI.

Fastest

#1 Cerebras: 3,034.2 t/s
#2 Together.ai: 912.9 t/s
#3 Fireworks: 781.6 t/s
#4 SambaNova: 742.6 t/s
#5 Lightning AI: 711.7 t/s

Output speed; 22 providers total.

Lowest Latency

#1 Baseten: 0.12 s
#2 Groq: 0.14 s
#3 Lightning AI: 0.18 s
#4 DeepInfra (Turbo): 0.21 s
#5 DeepInfra: 0.22 s

Time to first token; 22 providers total.

Lowest Price

#1 DeepInfra: $0.08
#2 Novita: $0.10
#3 Google Vertex: $0.16
#4 Clarifai: $0.16
#5 Lightning AI: $0.17

Blended price (per 1M tokens); 22 providers total.

gpt-oss-120B (high) is available through 22 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.

  • For output speed, the top providers are Cerebras (3,034.2 t/s), Together.ai (912.9 t/s), and Fireworks (781.6 t/s). Speed varies significantly across providers, with a 326% difference between the fastest and slowest (see the sketch below this list).
  • For latency, Baseten (0.12 s), Groq (0.14 s), and Lightning AI (0.18 s) offer the lowest time to first token.
  • For pricing, DeepInfra ($0.08), Novita ($0.10), and Google Vertex ($0.16) offer the lowest blended prices per 1M tokens. Prices vary up to 2.3x across providers.
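
As a sanity check on the quoted spread: percent difference is (fastest - slowest) / slowest * 100. A minimal Python sketch using the leaderboard figures above; the true slowest of the 22 providers is not listed here, so Lightning AI's #5 figure is used purely as an illustrative floor:

```python
# Back-of-the-envelope speed spread from the leaderboard figures above.
fastest = 3034.2  # Cerebras, output tokens per second
slowest = 711.7   # Lightning AI (#5); assumption: stands in for the true minimum

diff_pct = (fastest - slowest) / slowest * 100
print(f"difference: {diff_pct:.0f}%")      # ~326%
print(f"ratio: {fastest / slowest:.2f}x")  # ~4.26x
```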
[Interactive charts: Intelligence (Artificial Analysis Intelligence Index; higher is better), Speed (output tokens per second; higher is better), and Price (USD per 1M tokens; lower is better), filterable by parallel queries and prompt length.]

Pricing: Input and Output Prices

USD per 1M Tokens; Lower is better
Input price: Price per token included in the request/message sent to the API, represented as USD per million tokens.

Output price: Price per token generated by the model (received from the API), represented as USD per million tokens.

Speed vs. Price

Output Speed: Output Tokens per Second; Price: USD per 1M Tokens
[Scatter plot: output speed vs. blended price for all 22 providers, with the most attractive quadrant being high speed at low price.]

Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
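
With the 3:1 blend, input tokens are weighted three times as heavily as output tokens. A minimal sketch of the arithmetic, using hypothetical rates rather than any provider's actual pricing:

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend input and output prices at a 3:1 input:output token ratio."""
    return (3 * input_price + output_price) / 4

# Hypothetical rates in USD per 1M tokens (illustrative only):
# $0.10 input, $0.40 output -> $0.175 blended
print(blended_price(0.10, 0.40))  # 0.175
```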

Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.

Speed

Measured by Output Speed (tokens per second)

Output Speed

Output Tokens per Second; Higher is better; 1,000 Input Tokens

Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).

Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
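
Both streaming metrics (time to first token and output speed) can be approximated against any OpenAI-compatible endpoint. A minimal sketch, assuming such an endpoint; the base URL, API key, and model ID are placeholders, and streamed chunks serve as a rough proxy for tokens (a real harness would count with the model's tokenizer):

```python
import time
from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI(base_url="https://example-provider.com/v1",  # placeholder endpoint
                api_key="YOUR_API_KEY")                      # placeholder key

start = time.perf_counter()
first_chunk_at = None
chunks = 0

stream = client.chat.completions.create(
    model="gpt-oss-120b",  # placeholder; provider-specific model IDs vary
    messages=[{"role": "user", "content": "Explain TCP slow start."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter()  # time to first token (TTFT)
        chunks += 1
end = time.perf_counter()

print(f"TTFT: {first_chunk_at - start:.2f}s")
# Output speed counts only generation time, i.e. after the first chunk arrives.
print(f"~{chunks / (end - first_chunk_at):.1f} chunks/s (rough tokens/s proxy)")
```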

Latency vs. Output Speed

Latency: Seconds to First Token Received; Output Speed: Output Tokens per Second; 1,000 Input Tokens
[Scatter plot: latency vs. output speed for all 22 providers, with point size representing blended price (USD per 1M tokens); the most attractive quadrant is low latency with high output speed.]

Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).

Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).

Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.

Latency

Measured by Time (seconds) to First Token

Time to First Answer Token

Seconds to First Token Received; Lower is better; 1,000 Input Tokens
[Stacked bar chart: time to first answer token split into input processing time and 'thinking' time (reasoning models, where applicable).]

Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.

Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.

End-to-End Response Time

Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed

End-to-End Response Time

Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better; 1,000 Input Tokens
[Stacked bar chart: end-to-end response time split into input processing time, 'thinking' time (reasoning models), and outputting time.]

Seconds to receive a 500 token response. Key components:

  • Input time: Time to receive the first response token
  • Thinking time (reasoning models only): Time the model spends outputting reasoning tokens prior to providing an answer. The token count is based on the average number of reasoning tokens across a diverse set of 60 prompts (see methodology details).
  • Answer time: Time to generate 500 output tokens, based on output speed

For fair comparison, the number of reasoning tokens is standardized across all providers for each model based on the model's representative query token counts.

Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
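
Putting the components together: end-to-end time is roughly time to first token, plus the standardized reasoning tokens and the 500 answer tokens divided by output speed. A minimal sketch with illustrative inputs (not measured figures for any provider):

```python
def e2e_response_time(ttft_s: float, thinking_tokens: int,
                      output_speed_tps: float, answer_tokens: int = 500) -> float:
    """Seconds to a full answer: time to first token plus the time to emit
    the reasoning tokens and the 500-token answer at the measured speed."""
    return ttft_s + (thinking_tokens + answer_tokens) / output_speed_tps

# Illustrative only: 0.2 s TTFT, 800 reasoning tokens, 750 tokens/s
print(f"{e2e_response_time(0.2, 800, 750.0):.2f} s")  # ~1.93 s
```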

API Features

Function (Tool) Calling & JSON Mode

[Table: function calling and JSON mode support per provider. Providers listed: Cerebras, Parasail, Databricks, Cloudflare, Nebius Base, Azure, Baseten, SambaNova, Hyperbolic, Amazon, Together.ai, Lightning AI, DeepInfra (Turbo), Google Vertex, Snowflake, Groq, Clarifai, DeepInfra, Novita, Fireworks, Scaleway, Eigen AI.]

Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.

Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the model will always return a valid JSON object.
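
Concretely, both features are typically exercised through an OpenAI-compatible chat completions request. A minimal sketch; the endpoint, API key, model ID, and get_weather tool are illustrative placeholders, and actual support varies by provider as the table above indicates:

```python
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1",  # placeholder
                api_key="YOUR_API_KEY")                      # placeholder

# Function (tool) calling: the model may return a structured call to this
# tool instead of free text. The tool itself is hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
resp = client.chat.completions.create(
    model="gpt-oss-120b",  # placeholder; provider-specific model IDs vary
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)

# JSON mode: constrain the reply to a valid JSON object.
resp = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "List three colors as a JSON object."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)
```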

Context Window

Context Window: Tokens Limit; Higher is better

Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).

While each model has its own context window, in some cases providers limit it further.
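
Because the limit covers input and output combined, the practical output budget for a request is the provider's context limit minus the prompt length. A minimal sketch (the 131,072-token limit is illustrative; actual limits vary by provider):

```python
def max_output_budget(context_limit: int, input_tokens: int,
                      reserve: int = 0) -> int:
    """Tokens left for output once the prompt (and any reserved margin)
    is subtracted from a combined input+output context limit."""
    return max(context_limit - input_tokens - reserve, 0)

# Illustrative: a 131,072-token combined limit with a 1,000-token prompt
print(max_output_budget(131_072, 1_000))  # 130072
```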

Endpoint Evaluations

GPQAx16 Performance: gpt-oss-120B (high)

GPQA Diamond N=16 Runs: Minimum, 25th Percentile, Median, 75th Percentile, Maximum (Higher is Better)

Results on the GPQA Diamond benchmark, which tests scientific knowledge and reasoning. Each provider endpoint is evaluated in 16 independent runs.

Min (whisker bottom), 25th percentile (box lower edge), Median (line in box), 75th percentile (box upper edge), Max (whisker top).
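
The five box-plot statistics are straightforward to reproduce from per-run scores. A minimal NumPy sketch, with made-up accuracies standing in for 16 real GPQA Diamond runs:

```python
import numpy as np

# Hypothetical accuracy scores from N=16 independent runs (not real results)
runs = np.array([0.78, 0.80, 0.79, 0.81, 0.77, 0.80, 0.82, 0.79,
                 0.78, 0.81, 0.80, 0.79, 0.83, 0.78, 0.80, 0.79])

# Percentiles 0 and 100 are the min and max; 25/50/75 give the box edges
# and the median line.
stats = np.percentile(runs, [0, 25, 50, 75, 100])
print(dict(zip(["min", "p25", "median", "p75", "max"], np.round(stats, 3))))
```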

AIME25x32 Performance: gpt-oss-120B (high)

AIME 2025 N=32 Runs: Minimum, 25th Percentile, Median, 75th Percentile, Maximum (Higher is Better)

Results on the AIME 2025 benchmark, which tests advanced mathematical problem-solving capabilities. Each provider endpoint is evaluated in 32 independent runs.

Min (whisker bottom), 25th percentile (box lower edge), Median (line in box), 75th percentile (box upper edge), Max (whisker top).

IFBenchx8 Performance: gpt-oss-120B (high)

IFBench N=8 Runs: Minimum, 25th Percentile, Median, 75th Percentile, Maximum (Higher is Better)

Results on the IFBench benchmark, which evaluates instruction-following capabilities. Each provider endpoint is evaluated in 8 independent runs.

Min (whisker bottom), 25th percentile (box lower edge), Median (line in box), 75th percentile (box upper edge), Max (whisker top).

Summary Table of Key Comparison Metrics