GLM-5 (Reasoning) API Provider Benchmarking & Analysis
Analysis of API providers for GLM-5 (Reasoning) across key performance metrics: latency (time to first token), output speed (output tokens per second), and price. API providers benchmarked include Novita FP8, SiliconFlow (FP8), DeepInfra FP8, GMI FP8, Fireworks, Parasail (FP8), Google, and Together.ai (FP4).
Leaderboards (all 8 providers benchmarked):
- Fastest: output speed
- Lowest latency: time to first token
- Lowest price: blended price (per 1M tokens)
GLM-5 is available through 8 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.
- For output speed, the top providers are GMI FP8 (110.3 t/s), Google (98.4 t/s), and Fireworks (96.5 t/s). Speed varies significantly across providers, with the fastest roughly 3.8x the slowest.
- For latency, DeepInfra FP8 (0.86s), Together.ai (FP4) (1.19s), and Google (1.38s) offer the lowest time to first token.
- For pricing, DeepInfra FP8 ($1.24), Novita FP8 ($1.55), and SiliconFlow (FP8) ($1.55) offer the lowest blended prices per 1M tokens.
- DeepInfra FP8 offers both the lowest latency and the lowest blended price, making it attractive for cost-conscious applications. GMI FP8 is the fastest option for throughput-intensive workloads.
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above!
Pricing
Pricing: Input and Output Prices: GLM-5 Providers
Input price: price per token included in the request/message sent to the API, represented as USD per million tokens.
Output price: price per token generated by the model (received from the API), represented as USD per million tokens.
Speed vs. Price: GLM-5 Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Price per token, represented as USD per million tokens. Price is a blend of Input & Output token prices (3:1 input:output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
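The blended figure weights input tokens three times as heavily as output tokens. A minimal sketch of that calculation in Python (a simple 3:1 weighted average, which reproduces the blended prices quoted on this page):

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """Blend input and output prices at a 3:1 input:output token ratio."""
    return (3 * input_usd_per_m + 1 * output_usd_per_m) / 4

# DeepInfra FP8: $0.80 input, $2.56 output -> $1.24 blended (matches this page)
print(blended_price(0.80, 2.56))  # 1.24
# Novita FP8 / SiliconFlow (FP8): $1.00 input, $3.20 output -> $1.55 blended
print(blended_price(1.00, 3.20))  # 1.55
```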
Speed
Measured by Output Speed (tokens per second)
Output Speed: GLM-5 Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
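For illustration, output speed can be estimated client-side from a streaming response by timing the interval between the first and last chunks. The sketch below assumes an OpenAI-compatible endpoint; the base_url, API key, model identifier, and the characters-per-token heuristic are illustrative assumptions, and a rigorous benchmark would count tokens with the model's own tokenizer.

```python
import time
from openai import OpenAI  # assumes the provider exposes an OpenAI-compatible API

# Placeholder endpoint, key, and model name: substitute your provider's values.
client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="sk-...")

pieces, first_chunk_at = [], None
stream = client.chat.completions.create(
    model="glm-5",  # illustrative model identifier
    messages=[{"role": "user", "content": "Explain TCP slow start."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter()  # first generated chunk arrives
        pieces.append(chunk.choices[0].delta.content)
done_at = time.perf_counter()

# Rough token estimate (~4 characters per token); a real benchmark
# would use the model's tokenizer instead.
n_tokens = len("".join(pieces)) // 4
print(f"Output speed: {n_tokens / (done_at - first_chunk_at):.1f} tokens/s")
```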
Latency vs. Output Speed: GLM-5 Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
Price per token, represented as USD per million tokens. Price is a blend of Input & Output token prices (3:1 input:output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token: GLM-5 Providers
Time to first answer token received, in seconds, after the API request is sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents the time to receive the completion.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
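The gap between the first streamed token and the first answer token can be observed directly from the stream. In the sketch below, the reasoning_content delta field is an assumption — some OpenAI-compatible reasoning APIs stream thinking tokens under that name, but the field varies by provider — and the endpoint and model name are placeholders.

```python
import time
from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="sk-...")

start = time.perf_counter()
ttft = first_answer = None
stream = client.chat.completions.create(
    model="glm-5",  # illustrative model identifier
    messages=[{"role": "user", "content": "Is 127 prime?"}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    # 'reasoning_content' is a provider-specific convention, not a universal field.
    thinking = getattr(delta, "reasoning_content", None)
    if ttft is None and (thinking or delta.content):
        ttft = time.perf_counter() - start          # first token of any kind
    if first_answer is None and delta.content:
        first_answer = time.perf_counter() - start  # first answer token

print(f"Time to first token:        {ttft:.2f}s")
print(f"Time to first answer token: {first_answer:.2f}s (includes thinking)")
```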
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed.
End-to-End Response Time: GLM-5 Providers
Seconds to receive a 500-token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason before providing an answer. The token count is based on the average number of reasoning tokens across a diverse set of 60 prompts (see methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
For fair comparison, the number of reasoning tokens is standardized across all providers for each model based on the model's representative query token counts.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
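Since the three components are additive, the end-to-end figure can be reconstructed from the other metrics. A sketch with illustrative inputs (the numbers below are examples, not measurements from this page), assuming reasoning and answer tokens stream at the same output speed:

```python
def end_to_end_seconds(ttft_s: float, reasoning_tokens: int,
                       answer_tokens: int, speed_tps: float) -> float:
    """Time to first token + thinking time + answer time,
    with reasoning and answer tokens generated at the same output speed."""
    return ttft_s + reasoning_tokens / speed_tps + answer_tokens / speed_tps

# Example: 0.9s TTFT, 800 reasoning tokens, a 500-token answer, 100 tokens/s.
print(end_to_end_seconds(0.9, 800, 500, 100.0))  # -> 13.9 seconds
```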
API Features
Function (Tool) Calling & JSON Mode: GLM-5 Providers
Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the models will always return a valid JSON object.
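Most of the listed providers expose OpenAI-compatible chat endpoints, so both features are typically toggled per request. The sketch below shows a function-calling request; the endpoint, key, model name, and get_weather tool are placeholders, and whether a given provider honors these parameters is exactly what the table above indicates.

```python
from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="sk-...")

resp = client.chat.completions.create(
    model="glm-5",  # illustrative model identifier
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # For JSON mode instead, omit tools and pass:
    # response_format={"type": "json_object"}
)
print(resp.choices[0].message.tool_calls)
```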
Context Window: GLM-5 Providers
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (this varies by model).
While each model has its own context window, in some cases providers limit it to a lower value.
Summary Table of Key Comparison Metrics
Frequently Asked Questions
Common questions about GLM-5 (Reasoning) providers
GLM-5 (Reasoning) is available through 8 API providers: Novita FP8, SiliconFlow (FP8), DeepInfra FP8, GMI FP8, Fireworks, Parasail (FP8), Google, and Together.ai (FP4). Each provider offers different performance characteristics and pricing.
GLM-5 (Reasoning) is currently available through 8 API providers that we benchmark and track.
The providers with the lowest time to first token for GLM-5 (Reasoning) are DeepInfra FP8 (0.86s), Together.ai (FP4) (1.19s), and Google (1.38s). Lower latency means faster initial response time.
The most affordable providers for GLM-5 (Reasoning) by blended price are DeepInfra FP8 ($1.24 per 1M tokens), Novita FP8 ($1.55 per 1M tokens), and SiliconFlow (FP8) ($1.55 per 1M tokens). Blended price uses a 3:1 input to output token ratio.
The providers with the lowest input token pricing for GLM-5 (Reasoning) are DeepInfra FP8 ($0.80 per 1M input tokens), Novita FP8 ($1.00 per 1M input tokens), and SiliconFlow (FP8) ($1.00 per 1M input tokens).
The providers with the lowest output token pricing for GLM-5 (Reasoning) are DeepInfra FP8 ($2.56 per 1M output tokens), Novita FP8 ($3.20 per 1M output tokens), and SiliconFlow (FP8) ($3.20 per 1M output tokens).
Prices for GLM-5 (Reasoning) vary up to 1.3x across providers. The most affordable is DeepInfra FP8 at $1.24 per 1M tokens, while Novita FP8 charges $1.55 per 1M tokens.
Output speed for GLM-5 (Reasoning) varies significantly across providers. GMI FP8 is the fastest at 110.3 t/s, which is 3.8x faster than Parasail (FP8) at 29.4 t/s.
7 of 8 providers support JSON mode for GLM-5 (Reasoning): Novita FP8, DeepInfra FP8, GMI FP8, Fireworks, Parasail (FP8), Google, and Together.ai (FP4).
All 8 providers of GLM-5 (Reasoning) support function calling (tool use).
The best provider for GLM-5 (Reasoning) depends on your priorities: GMI FP8 offers the highest output speed, while DeepInfra FP8 has both the lowest latency and the most competitive pricing.
When choosing a provider for GLM-5 (Reasoning), consider: output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and API features like JSON mode or function calling.
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
For information about GLM-5 (Reasoning)'s intelligence, capabilities, modalities, and how it compares to other models, see the model overview page. View model overview →