Comparison Summary

Output Speed: Cerebras (3,135 tokens/s) delivers the highest output speed for gpt-oss-120B (high).
Latency: Groq (0.17s) and DeepInfra (Turbo) (0.22s) have the lowest latency for gpt-oss-120B (high).
Price: DeepInfra ($0.08 per 1M tokens, blended) offers the lowest price for gpt-oss-120B (high), at $0.04 and $0.19 per 1M input and output tokens respectively.
Metric definitions:
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Latency (Time to First Token): Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
Price: Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 ratio).
Figures represent the median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
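A quick worked example of the 3:1 price blend. Note that treating the $0.04 and $0.19 figures quoted in the summary as DeepInfra's input and output prices per 1M tokens is an inference from this page, not something it states explicitly:

```python
# Blended price per 1M tokens, using the 3:1 input:output weighting described above.
def blended_price(input_price_per_m: float, output_price_per_m: float) -> float:
    return (3 * input_price_per_m + output_price_per_m) / 4

# Assumed example: DeepInfra at $0.04 input / $0.19 output per 1M tokens.
print(round(blended_price(0.04, 0.19), 2))  # 0.08, matching the blended figure in the table
```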
[Chart: Output Speed, measured in output tokens per second]
Benchmark results (box plots by provider endpoint):
GPQA Diamond: tests scientific knowledge and reasoning; each provider endpoint is evaluated in 16 independent runs.
AIME 2025: tests advanced mathematical problem-solving capabilities; each provider endpoint is evaluated in 32 independent runs.
IFBench: evaluates instruction-following capabilities; each provider endpoint is evaluated in 8 independent runs.
Each box plot shows the min (whisker bottom), 25th percentile (box lower edge), median (line in box), 75th percentile (box upper edge), and max (whisker top).
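For reference, the box-plot summary above can be reproduced directly from the per-run scores. A minimal sketch in Python; the run scores below are hypothetical placeholders, not results from this page:

```python
# Compute the box-plot summary (min, P25, median, P75, max) from a set of per-run scores.
from statistics import quantiles

# Hypothetical accuracies from 16 independent runs (placeholder values only).
runs = [0.71, 0.68, 0.73, 0.70, 0.69, 0.74, 0.72, 0.70,
        0.67, 0.71, 0.73, 0.69, 0.72, 0.70, 0.68, 0.75]

p25, p50, p75 = quantiles(runs, n=4)  # quartile cut points
print({
    "min (whisker bottom)": min(runs),
    "25th percentile (box lower edge)": round(p25, 3),
    "median (line in box)": round(p50, 3),
    "75th percentile (box upper edge)": round(p75, 3),
    "max (whisker top)": max(runs),
})
```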
| API Provider | Model | Context Window | License | Model Intelligence |  | Price (USD per 1M tokens, blended) | Output Speed (tokens/s) | Latency: Time to First Token (s) | End-to-End Response Time (s) | Reasoning Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| Cerebras | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.45 | 3,135 | 0.31 | 1.10 | 0.64 |
| Databricks | gpt-oss-120B (high) | 128k | Open | 61 | −52 | $0.26 | 206 | 0.67 | 12.83 | 9.73 |
| Parasail | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.26 | 328 | 0.44 | 8.06 | 6.10 |
| Cloudflare | gpt-oss-120B (high) | 128k | Open | 61 | −52 | $0.45 | 122 | 84.20 | 104.74 | 16.43 |
| Novita | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.20 | 132 | 1.32 | 20.26 | 15.15 |
| Nebius Base | gpt-oss-120B (high) Base | 128k | Open | 61 | −52 | $0.26 | 251 | 0.57 | 10.55 | 7.98 |
| Baseten | gpt-oss-120B (high) | 128k | Open | 61 | −52 | $0.20 | 289 | 0.55 | 9.20 | 6.92 |
| Microsoft Azure | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.26 | 336 | 0.40 | 7.85 | 5.96 |
| SambaNova | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.31 | 575 | 0.43 | 4.78 | 3.48 |
| Eigen AI | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.20 | 774 | 1.08 | 4.31 | 2.58 |
| Hyperbolic | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.30 | 404 | 0.63 | 6.82 | 4.95 |
| Amazon Bedrock | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.26 | 213 | 0.65 | 12.40 | 9.41 |
| Together.ai | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.26 | 783 | 0.25 | 3.45 | 2.56 |
| Lightning AI | gpt-oss-120B (high) | 128k | Open | 61 | −52 | $0.17 | 790 | 0.72 | 3.89 | 2.53 |
| DeepInfra (Turbo) | gpt-oss-120B (high) (Turbo) | 131k | Open | 61 | −52 | $0.26 | 417 | 0.22 | 6.21 | 4.79 |
| Google Vertex | gpt-oss-120B (high) Vertex | 131k | Open | 61 | −52 | $0.16 | 408 | 0.25 | 6.39 | 4.91 |
| Fireworks | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.26 | 811 | 0.43 | 3.51 | 2.47 |
| Snowflake | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.22 | 313 | 0.47 | 8.45 | 6.38 |
| Groq | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.26 | 468 | 0.17 | 5.51 | 4.28 |
| Clarifai | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.16 | 654 | 0.25 | 4.07 | 3.06 |
| DeepInfra | gpt-oss-120B (high) | 131k | Open | 61 | −52 | $0.08 | 129 | 0.26 | 19.68 | 15.54 |
Additional metric definitions:
Time to First Answer Token: Time to first answer token received, in seconds, after the API request is sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents the time to receive the completion.
End-to-End Response Time: Seconds to receive a 500-token response, calculated from the time to first token, the 'thinking' time of reasoning models, and the output speed.
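The page does not spell out the end-to-end formula, but the table columns are consistent with end-to-end time being roughly time to first token plus reasoning ('thinking') time plus 500 tokens divided by output speed. A minimal check against the Cerebras row, treating that decomposition as an inference:

```python
# Reconstruct the end-to-end response time (500-token answer) from the table's other columns.
def end_to_end_seconds(ttft_s: float, thinking_s: float, tokens_per_s: float,
                       answer_tokens: int = 500) -> float:
    return ttft_s + thinking_s + answer_tokens / tokens_per_s

# Cerebras row: TTFT 0.31 s, reasoning time 0.64 s, output speed 3,135 tokens/s.
print(round(end_to_end_seconds(0.31, 0.64, 3135), 2))  # ~1.11 s vs 1.10 s in the table (rounding)
```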
Function Calling: Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
JSON Mode: Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the model will always return a valid JSON object.
Max Tokens: Maximum number of combined input and output tokens. Output tokens commonly have a significantly lower limit (varies by model).
Context Window: While models have their own context window, in some cases it is further limited by providers.
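As a rough illustration of what JSON mode looks like in practice, here is a hedged sketch assuming the provider exposes an OpenAI-compatible Chat Completions endpoint; the base URL, API key, and model ID are placeholders and not taken from this page. Tool calling would use the same API's `tools` parameter, where supported.

```python
# Request a guaranteed-JSON response from an OpenAI-compatible endpoint that supports JSON mode.
from openai import OpenAI

client = OpenAI(
    base_url="https://provider.example.com/v1",  # placeholder: the provider's OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",                      # placeholder
)

response = client.chat.completions.create(
    model="gpt-oss-120b",  # placeholder model ID; exact names vary by provider
    messages=[{"role": "user", "content": "List three LLM latency metrics as a JSON object."}],
    response_format={"type": "json_object"},  # JSON mode: the model must return a valid JSON object
)
print(response.choices[0].message.content)
```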