DeepInfra: Models Intelligence, Performance & Price
DeepInfra Model Comparison Summary
Intelligence Evaluations
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Intelligence vs. Price
Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Context Window
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (which varies by model).
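As a concrete illustration of that constraint, the output budget for a request is whatever remains of the window after the prompt, capped by the model's separate output limit. A minimal sketch in Python, where the 131,072-token window and 8,192-token output cap are placeholder values rather than any specific DeepInfra limit:

```python
def output_budget(prompt_tokens: int,
                  context_window: int = 131_072,      # placeholder window size
                  max_output_cap: int = 8_192) -> int:  # placeholder per-model output limit
    """Tokens available for the completion: the lesser of the model's output
    cap and what the combined input+output window still allows."""
    remaining = context_window - prompt_tokens
    return max(0, min(max_output_cap, remaining))

# With a 126,000-token prompt, the window (not the output cap) is the binding limit.
print(output_budget(prompt_tokens=126_000))  # -> 5072
```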
Function (Tool) Calling & JSON Mode
| Models | Function calling | JSON Mode |
|---|---|---|
| GLM-5 FP8, DeepInfra | | |
| Kimi K2.5 Turbo, DeepInfra | | |
| Kimi K2.5, DeepInfra | | |
| GLM-4.7 (FP4), DeepInfra | | |
| MiniMax-M2.5 (FP8), DeepInfra | | |
| Kimi K2 Thinking, DeepInfra | | |
| GLM-5 (FP8), DeepInfra | | |
| MiniMax-M2.1 (FP8), DeepInfra | | |
| NVIDIA Nemotron 3 Super, DeepInfra | | |
| GLM-4.7 (FP4), DeepInfra | | |
| gpt-oss-120B (high) (Turbo), DeepInfra | | |
| gpt-oss-120B (high), DeepInfra | | |
Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the model will always return a valid JSON object.
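As a rough illustration of what these two features look like in practice, the sketch below requests JSON mode and declares a single tool against an OpenAI-compatible endpoint (see the FAQ below). The base URL, model ID, and tool are illustrative assumptions, and not every model in the table supports both features:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed endpoint; confirm in DeepInfra docs
    api_key="YOUR_DEEPINFRA_API_KEY",
)

# JSON mode: the model is constrained to return a valid JSON object.
json_reply = client.chat.completions.create(
    model="zai-org/GLM-4.5-Air",  # illustrative model ID
    messages=[{"role": "user", "content": "List three LLM eval suites as a JSON object."}],
    response_format={"type": "json_object"},
)

# Function (tool) calling: the model may respond with a structured tool call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_model_price",  # hypothetical tool for illustration
        "description": "Look up the blended price of a model in USD per 1M tokens.",
        "parameters": {
            "type": "object",
            "properties": {"model": {"type": "string"}},
            "required": ["model"],
        },
    },
}]
tool_reply = client.chat.completions.create(
    model="zai-org/GLM-4.5-Air",
    messages=[{"role": "user", "content": "How much does GLM-4.5-Air cost?"}],
    tools=tools,
)

print(json_reply.choices[0].message.content)
print(tool_reply.choices[0].message.tool_calls)
```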
Pricing
Intelligence vs. Price
Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
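For reference, the blended figure can be reproduced from separate input and output prices using the 3:1 weighting described above; a minimal sketch, with placeholder prices rather than quoted DeepInfra rates:

```python
def blended_price(input_usd_per_mtok: float, output_usd_per_mtok: float) -> float:
    """Blend input and output prices at a 3:1 input-to-output token ratio."""
    return (3 * input_usd_per_mtok + output_usd_per_mtok) / 4

# Placeholder prices for illustration only (USD per 1M tokens).
print(blended_price(0.20, 0.60))  # -> 0.3
```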
Performance Summary
Output Speed vs. Price
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Speed
Measured by Output Speed (tokens per second)
Output Speed
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token
Time to first answer token received, in seconds, after the API request is sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.
Seconds to receive a 500 token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. The number of tokens is based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed.
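Under the component breakdown above, the end-to-end figure is approximately time to first token plus the reasoning and answer tokens divided by output speed; a minimal sketch with placeholder numbers (not measured DeepInfra values):

```python
def end_to_end_seconds(ttft_s: float, output_tps: float,
                       reasoning_tokens: int = 0, answer_tokens: int = 500) -> float:
    """Approximate end-to-end response time as input time + thinking time + answer time."""
    thinking_s = reasoning_tokens / output_tps  # zero for non-reasoning models
    answer_s = answer_tokens / output_tps
    return ttft_s + thinking_s + answer_s

# Placeholder figures for illustration only: 0.4 s TTFT, 250 tokens/s, 1,000 reasoning tokens.
print(end_to_end_seconds(ttft_s=0.4, output_tps=250, reasoning_tokens=1000))  # -> 6.4
```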
End-to-End Response Time vs. Price
Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
Key definitions
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (which varies by model).
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Price per token generated by the model (received from the API), represented as USD per million Tokens.
Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Metrics are 'live' and are based on the past 72 hours of measurements. Measurements are taken 8 times a day for single requests and 2 times per day for parallel requests.
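A single streaming request is enough to estimate the speed and latency metrics defined above, although the published figures average many requests over 72 hours. A minimal sketch assuming an OpenAI-compatible endpoint; the base URL and model ID are illustrative, and chunk count is only a rough proxy for token count:

```python
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed endpoint; confirm in DeepInfra docs
    api_key="YOUR_DEEPINFRA_API_KEY",
)

start = time.monotonic()
first_token_at = None
chunks = []

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # illustrative model ID
    messages=[{"role": "user", "content": "Write a 300-word summary of transformers."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        if first_token_at is None:
            first_token_at = time.monotonic()  # marks time to first token
        chunks.append(delta)
end = time.monotonic()

ttft = first_token_at - start
# Rough proxy: chunk count stands in for token count here.
output_speed = len(chunks) / (end - first_token_at)
print(f"TTFT: {ttft:.2f}s, ~{output_speed:.0f} chunks/s")
```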
Frequently Asked Questions
Common questions about DeepInfra
DeepInfra offers 77 models that we track: GLM-5, Kimi K2.5, Kimi K2.5, GLM-4.7, MiniMax-M2.5, Kimi K2 Thinking, GLM-5, MiniMax-M2.1, NVIDIA Nemotron 3 Super, GLM-4.7, gpt-oss-120B (high), gpt-oss-120B (high), GLM-4.6, DeepSeek V3.2, Kimi K2 0905, GLM-4.7-Flash, Qwen3 235B A22B 2507, DeepSeek V3.1 Terminus, DeepSeek V3.2 Exp, DeepSeek V3.1, DeepSeek R1 0528, GLM-4.5, Kimi K2, Qwen3 235B 2507, Qwen3 Coder 480B, Qwen3 Coder 480B, gpt-oss-20B (high), NVIDIA Nemotron 3 Nano, GLM-4.6V, GLM-4.5-Air, DeepSeek V3 0324, Qwen3 VL 235B A22B, Qwen3 Next 80B A3B, DeepSeek R1 (Jan), DeepSeek R1 (Jan), Llama Nemotron Super 49B v1.5, Llama 4 Maverick, Devstral Small (May), Qwen3 235B, Qwen3 32B, DeepSeek V3 (Dec), Qwen3 14B, Qwen3 VL 30B A3B, Qwen3 30B, Devstral Small, Mistral Small 3.2, NVIDIA Nemotron Nano 12B v2 VL, NVIDIA Nemotron Nano 9B V2, Llama Nemotron Super 49B v1.5, Llama 3.3 70B, Llama 4 Scout, Llama 3.1 Nemotron 70B, NVIDIA Nemotron 3 Nano, Qwen3 14B, Qwen3 30B, Llama 3.1 70B, Llama 3.1 70B, Olmo 3.1 32B Instruct, Llama 3.1 8B, Llama 3.1 8B, Phi-4, Gemma 3 27B, NVIDIA Nemotron Nano 12B v2 VL, Gemma 3 12B, Llama 3.2 11B (Vision), Gemma 3 4B, Llama 3.2 90B (Vision), DeepSeek R1 Distill Llama 70B, NVIDIA Nemotron Nano 9B V2, Llama 3.2 3B, Mistral Small 3, Mixtral 8x7B, DeepSeek R1 Distill Qwen 32B, Hermes 3 - Llama-3.1 70B, Qwen2.5 72B, Qwen3 32B, and QwQ 32B-Preview.
The most intelligent model available on DeepInfra is GLM-5 with an Intelligence Index score of 50.
The fastest model on DeepInfra by output speed is NVIDIA Nemotron 3 Super at 471.3 tokens per second.
The model with the lowest time to first token on DeepInfra is gpt-oss-120B (high) at 0.37s. Lower latency means faster initial response time.
The most affordable model on DeepInfra by blended price is Llama 3.2 3B at $0.02 per 1M tokens (3:1 input to output ratio).
Prices on DeepInfra vary up to 75x across models, from $0.02 per 1M tokens for Llama 3.2 3B to $1.50 per 1M tokens for DeepSeek R1 (Jan).
Yes, DeepInfra offers an OpenAI-compatible API, making it easy to switch from OpenAI or use existing OpenAI SDK integrations.
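In practice this usually means pointing an existing OpenAI SDK client at a different base URL and API key; a minimal sketch, where the base URL and model ID are assumptions to verify against DeepInfra's documentation:

```python
from openai import OpenAI

# Point the standard OpenAI client at DeepInfra instead of api.openai.com.
client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed; confirm in DeepInfra docs
    api_key="YOUR_DEEPINFRA_API_KEY",
)
response = client.chat.completions.create(
    model="zai-org/GLM-4.5-Air",  # illustrative model ID
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```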
49 of 77 models on DeepInfra support JSON mode for structured output.
65 of 77 models on DeepInfra support function calling (tool use).
Yes, DeepInfra offers 30 reasoning models: GLM-5, Kimi K2.5, Kimi K2.5, GLM-4.7, MiniMax-M2.5, Kimi K2 Thinking, MiniMax-M2.1, NVIDIA Nemotron 3 Super, gpt-oss-120B (high), gpt-oss-120B (high), GLM-4.6, GLM-4.7-Flash, Qwen3 235B A22B 2507, DeepSeek R1 0528, GLM-4.5, gpt-oss-20B (high), NVIDIA Nemotron 3 Nano, GLM-4.6V, GLM-4.5-Air, DeepSeek R1 (Jan), DeepSeek R1 (Jan), Llama Nemotron Super 49B v1.5, Qwen3 32B, Qwen3 14B, Qwen3 30B, NVIDIA Nemotron Nano 12B v2 VL, NVIDIA Nemotron Nano 9B V2, DeepSeek R1 Distill Llama 70B, DeepSeek R1 Distill Qwen 32B, and QwQ 32B-Preview. Reasoning models use extended thinking to work through complex problems before providing an answer.
Yes, all 77 models on DeepInfra are open weight models.
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
When choosing a model on DeepInfra, consider: intelligence (for quality-sensitive tasks), output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and features like context window size, JSON mode, or function calling support.