Z AI has launched a newer model, GLM-4.7-Flash, which we suggest considering instead.
For more information, see Comparison of GLM-4.7-Flash to other models and API provider benchmarks for GLM-4.7-Flash.
GLM-4.5-Air API Provider Benchmarking & Analysis
Analysis of API providers for GLM-4.5-Air across performance metrics including latency (time to first token), output speed (output tokens per second), price, and others. API providers benchmarked include DeepInfra, Together.ai (FP8), Nebius Base, and SiliconFlow.
GLM-4.5-Air is available through 4 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.
- For output speed, the top providers are Together.ai (FP8) (275.7 t/s), Nebius Base (127.9 t/s), and DeepInfra (86.7 t/s). Speed varies significantly across providers, with an 821% difference between the fastest and slowest (see the arithmetic sketch after this list).
- For latency, DeepInfra (1.03s), Together.ai (FP8) (1.16s), and Nebius Base (1.20s) offer the lowest time to first token.
- For pricing, SiliconFlow ($0.32), DeepInfra ($0.42), and Together.ai (FP8) ($0.42) offer the lowest blended prices per 1M tokens.
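The spread figures above are straightforward ratios of the benchmarked values. A minimal sketch of the arithmetic in Python, using the rounded speeds reported on this page:

```python
# Reported output speeds (tokens/s) for GLM-4.5-Air across providers.
speeds = {
    "Together.ai (FP8)": 275.7,
    "Nebius Base": 127.9,
    "DeepInfra": 86.7,
    "SiliconFlow": 29.9,
}

fastest = max(speeds.values())
slowest = min(speeds.values())

# Percentage difference between fastest and slowest provider.
# ~822% with these rounded figures; the page reports 821% from unrounded data.
pct_diff = (fastest - slowest) / slowest * 100

# The same spread expressed as a multiple (~9.2x).
ratio = fastest / slowest

print(f"Spread: {pct_diff:.0f}% ({ratio:.1f}x)")
```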
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above!
Pricing
Input and Output Prices: GLM-4.5-Air Providers
Speed vs. Price: GLM-4.5-Air Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed: GLM-4.5-Air Providers
Latency vs. Output Speed: GLM-4.5-Air Providers
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token: GLM-4.5-Air Providers
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time: GLM-4.5-Air Providers
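The chart above combines the components described in this section. As a rough illustration, here is a minimal sketch of the calculation; treating 'thinking' time as zero is an assumption for non-reasoning use, since that component is not broken out here:

```python
def end_to_end_seconds(ttft_s: float, output_tps: float,
                       thinking_s: float = 0.0, output_tokens: int = 500) -> float:
    """End-to-end response time: time to first token, plus any 'thinking'
    time for reasoning models, plus time to generate the output tokens."""
    return ttft_s + thinking_s + output_tokens / output_tps

# Example with DeepInfra's benchmarked figures (1.03s TTFT, 86.7 t/s):
# 1.03 + 500 / 86.7 ≈ 6.8 seconds for a 500-token response.
print(f"{end_to_end_seconds(1.03, 86.7):.1f}s")
```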
API Features
Function (Tool) Calling & JSON Mode: GLM-4.5-Air Providers
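Where a provider supports JSON mode, it is typically requested through the standard response_format parameter of an OpenAI-compatible API, which several of these providers expose. A minimal sketch; the base URL and model identifier below are placeholders, not confirmed values for any provider:

```python
from openai import OpenAI

# Placeholder endpoint and credentials; substitute your provider's actual values.
client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="glm-4.5-air",  # provider-specific model id (assumed name)
    messages=[
        {"role": "system", "content": "Reply only in JSON."},
        {"role": "user", "content": "List three API providers for GLM-4.5-Air."},
    ],
    response_format={"type": "json_object"},  # JSON mode, where supported
)
print(response.choices[0].message.content)
```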
Context Window: GLM-4.5-Air Providers
Summary Table of Key Comparison Metrics

| Provider | Output Speed (t/s) | Latency (s, TTFT) | Blended Price ($/1M tokens) | Input Price ($/1M) | Output Price ($/1M) | Function Calling | JSON Mode |
|---|---|---|---|---|---|---|---|
| DeepInfra | 86.7 | 1.03 | 0.42 | 0.20 | 1.10 | Yes | No |
| Together.ai (FP8) | 275.7 | 1.16 | 0.42 | 0.20 | 1.10 | Yes | No |
| Nebius Base | 127.9 | 1.20 | 0.45 | — | — | Yes | Yes |
| SiliconFlow | 29.9 | — | 0.32 | 0.14 | 0.86 | No | No |
Frequently Asked Questions
Common questions about GLM-4.5-Air providers
GLM-4.5-Air is available through 4 API providers: DeepInfra, Together.ai (FP8), Nebius Base, and SiliconFlow. Each provider offers different performance characteristics and pricing.
GLM-4.5-Air is currently available through 4 API providers that we benchmark and track.
The fastest providers for GLM-4.5-Air by output speed are Together.ai (FP8) (275.7 t/s), Nebius Base (127.9 t/s), and DeepInfra (86.7 t/s). Output speed measures how quickly tokens are generated after the model starts responding.
The providers with the lowest time to first token for GLM-4.5-Air are DeepInfra (1.03s), Together.ai (FP8) (1.16s), and Nebius Base (1.20s). Lower latency means faster initial response time.
The most affordable providers for GLM-4.5-Air by blended price are SiliconFlow ($0.32 per 1M tokens), DeepInfra ($0.42 per 1M tokens), and Together.ai (FP8) ($0.42 per 1M tokens). Blended price uses a 3:1 input to output token ratio.
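The blended figure follows directly from that 3:1 weighting. A minimal sketch of the calculation, using the input and output prices listed in the following answers:

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Blended price per 1M tokens, weighting input:output tokens 3:1."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# SiliconFlow: (3 * 0.14 + 0.86) / 4 = 0.32
# DeepInfra:   (3 * 0.20 + 1.10) / 4 ≈ 0.42
print(blended_price(0.14, 0.86), blended_price(0.20, 1.10))
```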
The providers with the lowest input token pricing for GLM-4.5-Air are SiliconFlow ($0.14 per 1M input tokens), DeepInfra ($0.20 per 1M input tokens), and Together.ai (FP8) ($0.20 per 1M input tokens).
The providers with the lowest output token pricing for GLM-4.5-Air are SiliconFlow ($0.86 per 1M output tokens), DeepInfra ($1.10 per 1M output tokens), and Together.ai (FP8) ($1.10 per 1M output tokens).
Prices for GLM-4.5-Air vary by up to 1.4x across providers. The most affordable is SiliconFlow at $0.32 per 1M tokens, while Nebius Base charges $0.45 per 1M tokens.
Output speed for GLM-4.5-Air varies significantly across providers. Together.ai (FP8) is the fastest at 275.7 t/s, which is 9.2x faster than SiliconFlow at 29.9 t/s.
1 of 4 providers supports JSON mode for GLM-4.5-Air: Nebius Base.
3 of 4 providers support function calling for GLM-4.5-Air: DeepInfra, Together.ai (FP8), and Nebius Base.
The best provider for GLM-4.5-Air depends on your priorities: Together.ai (FP8) offers the highest output speed, DeepInfra has the lowest latency, and SiliconFlow provides the most competitive pricing.
When choosing a provider for GLM-4.5-Air, consider: output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and API features like JSON mode or function calling.
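As a concrete illustration of weighing those criteria, here is a hypothetical helper that filters and ranks the providers benchmarked on this page; the record fields and the single-priority scheme are assumptions for illustration, not part of any provider API:

```python
# Provider records built from the benchmarked figures on this page.
PROVIDERS = [
    {"name": "DeepInfra",         "tps": 86.7,  "ttft": 1.03, "price": 0.42, "tools": True},
    {"name": "Together.ai (FP8)", "tps": 275.7, "ttft": 1.16, "price": 0.42, "tools": True},
    {"name": "Nebius Base",       "tps": 127.9, "ttft": 1.20, "price": 0.45, "tools": True},
    {"name": "SiliconFlow",       "tps": 29.9,  "ttft": None, "price": 0.32, "tools": False},
]

def pick_provider(priority: str, need_tools: bool = False) -> dict:
    """Pick a provider by one priority: 'speed', 'latency', or 'price'."""
    candidates = [p for p in PROVIDERS if p["tools"] or not need_tools]
    if priority == "speed":
        return max(candidates, key=lambda p: p["tps"])
    if priority == "latency":
        return min((p for p in candidates if p["ttft"] is not None),
                   key=lambda p: p["ttft"])
    return min(candidates, key=lambda p: p["price"])

# Cheapest provider that also supports function calling: DeepInfra at $0.42.
print(pick_provider("price", need_tools=True)["name"])
```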
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
For information about GLM-4.5-Air's intelligence, capabilities, modalities, and how it compares to other models, see the model overview page.