
Hyperbolic: Models Intelligence, Performance & Price

Analysis of Hyperbolic's models across key metrics including quality, price, output speed, latency, context window & more. This analysis is intended to support you in choosing the best model provided by Hyperbolic for your use case. For more details, including our methodology, see our FAQs.

Hyperbolic Model Comparison Summary

Intelligence: gpt-oss-120B (high) and Qwen3 235B A22B 2507 (FP8) are the highest intelligence models offered by Hyperbolic, followed by DeepSeek R1 0528, Qwen3 Next 80B A3B & Qwen3 235B 2507.
Output Speed (tokens/s): gpt-oss-120B (low) (426 t/s) and gpt-oss-120B (high) (404 t/s) are the fastest models offered by Hyperbolic, followed by Qwen3 Next 80B A3B, Qwen3 Next 80B A3B & Llama 3.1 8B.
Latency (seconds): Qwen3 Next 80B A3B (0.49s) and Qwen2.5 Coder 32B (0.49s) are the lowest latency models offered by Hyperbolic, followed by gpt-oss-20B (low), Llama 3.1 8B & gpt-oss-20B (high).
Blended Price ($/M tokens): gpt-oss-20B (high) ($0.10) and gpt-oss-20B (low) ($0.10) are the cheapest models offered by Hyperbolic, followed by Llama 3.1 8B, Llama 3.2 3B & Qwen2.5 Coder 32B.
Context Window Size: Qwen3 Coder 480B (FP8) (262k) and Qwen3 Next 80B A3B (262k) are the largest context window models offered by Hyperbolic, followed by Qwen3 235B 2507, Qwen3 Next 80B A3B & DeepSeek R1 0528.

Highlights

Intelligence: Artificial Analysis Intelligence Index; Higher is better
Speed: Output Tokens per Second; Higher is better
Price: USD per 1M Tokens; Lower is better

Intelligence Evaluations

Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index; Higher is better

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Intelligence Evaluations

Intelligence evaluations measured independently by Artificial Analysis; Higher is better
GDPval-AA ((ELO-500)/2000; see the worked example after this list)
Terminal-Bench Hard (Agentic Coding & Terminal Use)
𝜏²-Bench Telecom (Agentic Tool Use)
AA-LCR (Long Context Reasoning)
AA-Omniscience Accuracy (Knowledge)
AA-Omniscience Non-Hallucination Rate (1 - Hallucination Rate)
Humanity's Last Exam (Reasoning & Knowledge)
GPQA Diamond (Scientific Reasoning)
SciCode (Coding)
IFBench (Instruction Following)
CritPt (Physics Reasoning)
MMMU Pro (Visual Reasoning)
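
As a worked illustration of the GDPval-AA entry above, the reported ELO is rescaled to a 0-1 contribution via (ELO-500)/2000; the ELO value below is made up purely for the arithmetic.

```latex
% GDPval-AA normalization: rescale an ELO score to [0, 1]
% The ELO of 1300 is illustrative only
\text{score} = \frac{\text{ELO} - 500}{2000}
\quad\Rightarrow\quad
\frac{1300 - 500}{2000} = 0.4
```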

Intelligence vs. Price

Artificial Analysis Intelligence Index; Price: USD per 1M Tokens
Most attractive quadrant highlighted; Hyperbolic and Hyperbolic (FP8) endpoints shown

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
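
As a worked example of the 3:1 blend (the per-token prices below are illustrative, not Hyperbolic's actual rates):

```latex
% Blended price: input price weighted 3:1 against output price
% $0.10/M input and $0.40/M output are illustrative values only
\text{Blended} = \frac{3 \cdot P_{\text{input}} + P_{\text{output}}}{4}
\quad\Rightarrow\quad
\frac{3(0.10) + 0.40}{4} = 0.175 \ \text{USD per 1M tokens}
```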

Context Window

Context Window

Context Window: Tokens Limit; Higher is better

Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
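
A minimal sketch of how the combined limit behaves is shown below; the function name, the 262,144-token window, and the separate output cap are illustrative assumptions, not Hyperbolic-specific values.

```python
def max_output_tokens(context_window: int, input_tokens: int, output_cap: int | None = None) -> int:
    """Output budget remaining once the prompt is counted against the combined limit."""
    remaining = max(context_window - input_tokens, 0)
    # Many models also enforce a separate, lower output-token limit.
    return min(remaining, output_cap) if output_cap is not None else remaining

# e.g. a 262,144-token context window, a 200,000-token prompt, and a 32,768-token output cap
print(max_output_tokens(262_144, 200_000, output_cap=32_768))  # -> 32768
```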

JSON Mode & Function Calling

Function (Tool) Calling & JSON Mode

Models | Function calling | JSON Mode
gpt-oss-120B (low), Hyperbolic
gpt-oss-20B (high), Hyperbolic
gpt-oss-20B (low), Hyperbolic
gpt-oss-120B (high), Hyperbolic
Llama 3.3 70B, Hyperbolic
Llama 3.1 405B, Hyperbolic
DeepSeek R1 0528, Hyperbolic
Qwen3 Coder 480B (FP8), Hyperbolic
Qwen3 Next 80B A3B, Hyperbolic
Qwen3 235B 2507, Hyperbolic
Qwen3 235B A22B 2507 (FP8), Hyperbolic
Qwen3 Next 80B A3B, Hyperbolic

Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.

Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the models will always return a valid JSON object.
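
The table above only indicates support. As a hedged illustration, this is how function calling and JSON mode are typically requested through an OpenAI-compatible chat completions endpoint; the base URL, model identifier, and tool schema here are assumptions for the sketch, not confirmed Hyperbolic values.

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; substitute the provider's documented base URL and key.
client = OpenAI(base_url="https://api.hyperbolic.xyz/v1", api_key="YOUR_KEY")

# Function (tool) calling: the model may return a structured tool call instead of plain text.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]
resp = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # assumed model identifier
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)

# JSON mode: constrains the response to a valid JSON object.
resp = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "List three colors as a JSON object."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)
```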


Performance Summary

Output Speed vs. Price

Output Speed: Output Tokens per Second; Price: USD per 1M Tokens; 1,000 Input Tokens
Most attractive quadrant highlighted; Hyperbolic and Hyperbolic (FP8) endpoints shown

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API for models which support streaming).

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).

Speed

Measured by Output Speed (tokens per second)

Output Speed

Output Tokens per Second; Higher is better; 1,000 Input Tokens

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API for models which support streaming).
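
As an illustrative sketch (not Artificial Analysis' measurement harness), output speed and time to first token can be estimated from a single streaming chat completion; the endpoint, model identifier, and chunk-based token counting below are assumptions.

```python
import time
from openai import OpenAI

# Assumed OpenAI-compatible endpoint and model ID; substitute the provider's documented values.
client = OpenAI(base_url="https://api.hyperbolic.xyz/v1", api_key="YOUR_KEY")

start = time.perf_counter()
first_token_at = None
chunk_count = 0

stream = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Write a 500-word story about a lighthouse."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # time to first content chunk ~ TTFT
        chunk_count += 1
end = time.perf_counter()

if first_token_at is not None:
    print(f"TTFT: {first_token_at - start:.2f}s")
    # Chunks are only a rough proxy for tokens; a tokenizer would give an exact count.
    print(f"~{chunk_count / (end - first_token_at):.0f} chunks/s after the first chunk")
```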

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Latency

Measured by Time (seconds) to First Token

Time to First Answer Token

Seconds to First Token Received; Lower is better; broken down into input processing time and 'thinking' time (reasoning models, where applicable)

Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.

Seconds to receive a 500 token response. Key components:

  • Input time: Time to receive the first response token
  • Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. Amount of tokens based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
  • Answer time: Time to generate 500 output tokens, based on output speed

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

End-to-End Response Time

Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed
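
Reading that calculation off as a formula (the first-token latency, reasoning-token count, and output speed below are illustrative numbers, not measured Hyperbolic values):

```latex
% End-to-end time for a 500-token answer (illustrative values)
T_{\text{e2e}} = T_{\text{first token}}
  + \frac{N_{\text{reasoning}}}{v_{\text{output}}}
  + \frac{500}{v_{\text{output}}}
\quad\Rightarrow\quad
0.5\,\text{s} + \frac{300}{200\ \text{tok/s}} + \frac{500}{200\ \text{tok/s}} = 4.5\,\text{s}
```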

End-to-End Response Time vs. Price

End-to-End Response Time: End-to-End Seconds to Output 500 Tokens; Price: USD per 1M Tokens
Most attractive quadrant highlighted; Hyperbolic and Hyperbolic (FP8) endpoints shown

Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Key definitions

Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API for models which support streaming).

Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).

Price per token generated by the model (received from the API), represented as USD per million Tokens.

Price per token included in the request/message sent to the API, represented as USD per million Tokens.

Metrics are 'live' and are based on the past 72 hours of measurements. Measurements are taken 8 times per day for single requests and 2 times per day for parallel requests.