Qwen3 Coder 480B A35B Instruct API Provider Benchmarking & Analysis
Analysis of API providers for Qwen3 Coder 480B A35B Instruct across performance metrics including latency (time to first token), output speed (output tokens per second), price and others. API providers benchmarked include DeepInfra (FP8), Hyperbolic (FP8), Baseten (FP8), Together.ai (FP8), Alibaba Cloud, Nebius, Amazon Bedrock, Google Vertex, DeepInfra (Turbo, FP4), Novita, Eigen AI.
Highlights across the 11 benchmarked providers: Fastest (output speed), Lowest Latency (time to first token), and Lowest Price (blended price per 1M tokens).
Qwen3 Coder 480B is available through 11 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.
- For output speed, the top providers are Google Vertex (164.3 t/s), Together.ai (FP8) (155.6 t/s), and Baseten (FP8) (75.2 t/s). Speed varies significantly across providers, with a 157% difference between the fastest and slowest.
- For latency, DeepInfra (Turbo, FP4) (0.26s), Google Vertex (0.32s), and DeepInfra (FP8) (0.37s) offer the lowest time to first token.
- For pricing, DeepInfra (Turbo, FP4) ($0.51), Novita ($0.55), and Amazon Bedrock ($0.61) offer the lowest blended prices per 1M tokens.
- DeepInfra (Turbo, FP4) offers both the lowest latency and the lowest blended price, making it attractive for cost-conscious applications. Google Vertex is the fastest option for throughput-intensive workloads.
Pricing
Pricing: Input and Output Prices: Qwen3 Coder 480B Providers
Input price: price per token included in the request/message sent to the API, represented as USD per million tokens.
Output price: price per token generated by the model (received from the API), represented as USD per million tokens.
Speed vs. Price: Qwen3 Coder 480B Providers
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input-to-output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
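As a quick illustration of that blend, the sketch below computes a blended price under the assumption that the 3:1 ratio is a weighted average of three parts input price to one part output price; the sample rates are placeholders, not taken from any provider above.

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """Blend input and output token prices at a 3:1 input-to-output ratio (assumed weighting)."""
    return (3 * input_usd_per_m + 1 * output_usd_per_m) / 4

# Hypothetical rates in USD per million tokens, for illustration only.
print(blended_price(0.40, 1.60))  # -> 0.70
```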
Speed
Measured by Output Speed (tokens per second)
Output Speed: Qwen3 Coder 480B Providers
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Latency vs. Output Speed: Qwen3 Coder 480B Providers
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input-to-output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
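As a minimal sketch of how these two metrics can be measured, the code below times a streaming request against an OpenAI-compatible endpoint; the base URL, API key, model ID, and the characters-per-token heuristic are assumptions for illustration, and real benchmarking would count tokens exactly and aggregate many runs.

```python
import time
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; substitute your provider's base URL, key, and model ID.
client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_KEY")

start = time.perf_counter()
first_token_at = None
pieces = []

stream = client.chat.completions.create(
    model="qwen3-coder-480b-a35b-instruct",  # placeholder model ID
    messages=[{"role": "user", "content": "Write a quicksort function in Python."}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content or ""
    if delta and first_token_at is None:
        first_token_at = time.perf_counter()  # time to first token (TTFT)
    pieces.append(delta)
end = time.perf_counter()

# Rough token count using a ~4 characters-per-token heuristic (assumption).
approx_tokens = len("".join(pieces)) / 4
ttft_s = first_token_at - start
output_tps = approx_tokens / (end - first_token_at)  # tokens/s after the first chunk
print(f"TTFT: {ttft_s:.2f}s, output speed: {output_tps:.1f} tokens/s (approx.)")
```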
Latency
Measured by Time (seconds) to First Token
Time to First Token: Qwen3 Coder 480B Providers
Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed.
End-to-End Response Time: Qwen3 Coder 480B Providers
Seconds to receive a 500-token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (reasoning models only): Time reasoning models spend outputting tokens to reason before providing an answer. The token count is based on the average number of reasoning tokens across a diverse set of 60 prompts (see methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
For fair comparison, the number of reasoning tokens is standardized across all providers for each model, based on that model's representative query token counts.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
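Putting those components together, the sketch below reproduces the end-to-end arithmetic; the example figures are the Google Vertex medians quoted earlier, and the zero reasoning-token count assumes a non-reasoning (instruct-style) model.

```python
def end_to_end_seconds(ttft_s: float, output_tps: float,
                       reasoning_tokens: int = 0, answer_tokens: int = 500) -> float:
    """Time to first token, plus time to emit reasoning tokens (if any), plus time to emit the answer."""
    thinking_s = reasoning_tokens / output_tps
    answer_s = answer_tokens / output_tps
    return ttft_s + thinking_s + answer_s

# Google Vertex medians quoted above; reasoning tokens set to 0 for a non-reasoning model.
print(end_to_end_seconds(ttft_s=0.32, output_tps=164.3))  # ~3.4 s for a 500-token response
```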
API Features
Function (Tool) Calling & JSON Mode: Qwen3 Coder 480B Providers
Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the models will always return a valid JSON object.
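For providers that expose an OpenAI-compatible chat completions API, requests using these two features might look like the sketch below; the base URL, API key, model ID, and tool schema are illustrative assumptions, and actual support varies by provider as the table indicates.

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; base URL, key, and model ID are placeholders.
client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_KEY")
MODEL = "qwen3-coder-480b-a35b-instruct"

# Function (tool) calling: declare a callable tool the model may choose to invoke.
tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool
        "description": "Run the project's unit tests and return a summary.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]
tool_resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Run the tests under ./tests."}],
    tools=tools,
)
print(tool_resp.choices[0].message.tool_calls)

# JSON mode: the completion is constrained to be a valid JSON object.
json_resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "List three Python web frameworks as a JSON object."}],
    response_format={"type": "json_object"},
)
print(json_resp.choices[0].message.content)
```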
Context Window: Qwen3 Coder 480B Providers
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
While models have their own context window, in some cases this is limited by providers.
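A small sketch of how a provider-limited window constrains the remaining output budget; the window size, prompt length, and output cap below are placeholders rather than any provider's documented limits.

```python
def output_token_budget(context_window: int, prompt_tokens: int,
                        provider_output_cap: int | None = None) -> int:
    """Tokens available for the completion after the prompt, respecting any provider output cap."""
    remaining = max(context_window - prompt_tokens, 0)
    return min(remaining, provider_output_cap) if provider_output_cap is not None else remaining

# Placeholder figures: a 262,144-token window, a 20,000-token prompt, a 32,768-token output cap.
print(output_token_budget(262_144, 20_000, provider_output_cap=32_768))  # -> 32768
```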
DeepInfra (FP8)