Llama 3 8B: API Provider Benchmarking & Analysis
Analysis of API providers for Llama 3 Instruct 8B across performance metrics including latency (time to first token), output speed (output tokens per second), price, and others. API providers benchmarked include Microsoft Azure, Amazon Bedrock, Groq, Together.ai, Deepinfra, Replicate, and Novita.
Note: Some providers are deprecating their Llama 3 endpoints in favor of Llama 3.1 endpoints.
Meta has launched a newer model, Llama 3.1 8B. We suggest considering this model instead of Llama 3 8B. See the following pages for a comparison of Llama 3.1 8B to other models and Llama 3.1 8B API provider benchmarks.
Comparison Summary
- Output Speed (tokens/s): Groq (1,346 t/s) and Together.ai (186 t/s) are the fastest providers of Llama 3 8B, followed by Deepinfra, Amazon & Azure.
- Latency (TTFT): Deepinfra (0.21s) and Groq (0.29s) have the lowest latency for Llama 3 8B, followed by Amazon, Azure & Together.ai.
- Blended Price ($/M tokens): Deepinfra ($0.04) and Groq ($0.06) are the most cost-effective providers for Llama 3 8B, followed by Together.ai, Amazon & Azure.
- Input Token Price: Deepinfra ($0.03) and Groq ($0.05) offer the lowest input token prices for Llama 3 8B, followed by Together.ai, Amazon & Azure.
- Output Token Price: Deepinfra ($0.06) and Groq ($0.08) offer the lowest output token prices for Llama 3 8B, followed by Together.ai, Amazon & Azure.
Highlights
- Intelligence: Artificial Analysis Intelligence Index; higher is better
- Speed: Output tokens per second; higher is better
- Price: USD per 1M tokens; lower is better
Note: Long prompts are not supported, as a context window of at least 10k tokens is required.
Summary Analysis
Output Speed vs. Price: Llama 3 8B Providers
Output Speed: Output Tokens per Second; Price: USD per 1M Tokens
Most attractive quadrant
Providers shown: Amazon, Azure, Deepinfra, Groq, Together.ai
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Notes: Llama 3 8B, Groq: 8k context
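The blended price on this chart follows directly from the 3:1 input:output ratio above. A minimal sketch in Python, using the per-million-token input and output prices quoted in the Comparison Summary:

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend per-1M-token prices at a 3:1 input:output ratio."""
    return (3 * input_price + 1 * output_price) / 4

# Deepinfra's quoted Llama 3 8B prices: $0.03 input, $0.06 output (per 1M tokens)
print(blended_price(0.03, 0.06))  # 0.0375 -> rounds to the $0.04 shown above
```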
Latency vs. Output Speed: Llama 3 8B Providers
Latency: Seconds to First Token Received; Output Speed: Output Tokens per Second; 1,000 Input Tokens
Most attractive quadrant
Size represents Price (USD per M Tokens)
Providers shown: Amazon, Azure, Deepinfra, Groq, Together.ai
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Notes: Llama 3 8B, Groq: 8k context
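Both metrics on this chart can be captured from a single streaming request: TTFT is the time until the first content chunk arrives, and output speed is measured over the generation phase that follows. A minimal sketch against an OpenAI-compatible endpoint; the base URL, API key, and model identifier are placeholders, and counting chunks only approximates counting tokens:

```python
import time
from openai import OpenAI  # many providers expose an OpenAI-compatible API

# Placeholder endpoint, key, and model name; substitute a real provider's values.
client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_KEY")

start = time.perf_counter()
first_token_time = None
chunks = 0

stream = client.chat.completions.create(
    model="llama-3-8b-instruct",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Write a haiku about benchmarks."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_time is None:
            first_token_time = time.perf_counter()  # TTFT measured here
        chunks += 1  # rough proxy: one content chunk ~ one token

end = time.perf_counter()
print(f"TTFT: {first_token_time - start:.2f}s")
# Output speed covers only the generation phase, i.e. time after the first chunk.
print(f"Output speed: ~{(chunks - 1) / (end - first_token_time):.1f} tokens/s")
```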
Context Window: Llama 3 8B Providers
Context Window: Tokens Limit; Higher is better
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
Variance between providers: While each model has its own context window, in some cases providers limit it further.
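Because the limit covers input and output combined, a request only fits if the input tokens plus the output budget stay inside the provider's window. A tiny sketch, assuming the 8k (8,192-token) window all providers here expose for Llama 3 8B:

```python
def fits_context(input_tokens: int, max_output_tokens: int,
                 context_window: int = 8_192) -> bool:
    """True if combined input + output tokens fit the provider's context window."""
    return input_tokens + max_output_tokens <= context_window

print(fits_context(7_000, 1_000))  # True: 8,000 <= 8,192
print(fits_context(7_000, 2_000))  # False: request would be rejected or truncated
```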
Pricing
Pricing: Input and Output Prices: Llama 3 8B Providers
USD per 1M Tokens; Lower is better
Input price
Output price
Input Price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Output Price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
Notes: Llama 3 8B, Groq: 8k context
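Given these two prices, the cost of an individual request is straightforward to estimate. A minimal sketch using Deepinfra's quoted figures ($0.03 input / $0.06 output per 1M tokens):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD of one request, given per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1e6

# 1,000 input tokens + 500 output tokens at Deepinfra's quoted prices
print(f"${request_cost(1_000, 500, 0.03, 0.06):.6f}")  # $0.000060
```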
Speed
Measured by Output Speed (tokens per second)
Output Speed: Llama 3 8B Providers
Output Tokens per Second; Higher is better
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Notes: Llama 3 8B, Groq: 8k context
Output Speed Variance: Llama 3 8B Providers
Output Tokens per Second; Results by percentile; Higher is better; 1,000 Input Tokens
Median; other points represent the 5th, 25th, 75th, and 95th percentiles respectively
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Boxplot: Shows variance of measurements

Notes: Llama 3 8B, Groq: 8k context
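The percentile figures behind these boxplots are standard order statistics over the measurement window. A minimal sketch with hypothetical output-speed samples:

```python
import numpy as np

# Hypothetical output-speed measurements (tokens/s) collected over 72 hours
samples = np.array([98.0, 101.5, 103.7, 105.2, 99.8, 110.4, 96.3])

# The same percentiles the boxplots report: 5th, 25th, median (50th), 75th, 95th
p5, p25, p50, p75, p95 = np.percentile(samples, [5, 25, 50, 75, 95])
print(f"P50 (median): {p50:.1f} t/s; spread P5-P95: {p5:.1f}-{p95:.1f} t/s")
```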
Output Speed by Input Token Count (Context Length): Llama 3 8B Providers
Output Tokens per Second; Higher is better
100 input tokens
1k input tokens
Input Token Count: Number of tokens provided in the request. See Prompt Options above for benchmarks of different input prompt lengths across other charts.
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Notes: Llama 3 8B, Groq: 8k context
Latency
Measured by Time (seconds) to First Token
Time to First Token: Llama 3 8B Providers
Seconds to First Token Received; Lower is better; 1,000 Input Tokens
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Time to First Token Variance: Llama 3 8B Providers
Seconds to First Token Received; Results by percentile; Lower median is better; 1,000 Input Tokens
Median; other points represent the 5th, 25th, 75th, and 95th percentiles respectively
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Boxplot: Shows variance of measurements

Latency by Input Token Count (Context Length): Llama 3 8B Providers
Seconds to First Token Received; Lower is better
100 input tokens
1k input tokens
Input Token Count: Number of tokens provided in the request. See Prompt Options above for benchmarks of different input prompt lengths across other charts.
Latency (Time to First Token): Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
End-to-End Response Time
Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time vs. Price: Llama 3 8B Providers
End-to-End Response Time: End-to-End Seconds to Output 500 Tokens; Price: USD per 1M Tokens
Most attractive quadrant
Providers shown: Amazon, Azure, Deepinfra, Groq, Together.ai
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
End-to-End Response Time: Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
End-to-End Response Time: Llama 3 8B Providers
Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better; 1,000 Input Tokens
Input processing time
'Thinking' time (reasoning models)
Outputting time
End-to-End Response Time: Seconds to receive a 500 token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (reasoning models only): Time reasoning models spend outputting tokens to reason before providing an answer. The token count is based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
Standardized Reasoning Tokens: For fair comparison, the number of reasoning tokens is standardized across all providers for each model based on the model's representative query token counts.
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
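Putting the components together, the end-to-end figure is simply TTFT plus thinking time plus the time to stream 500 tokens. A back-of-envelope check using Groq's medians from the summary table below (Llama 3 8B is not a reasoning model, so thinking time is zero):

```python
ttft = 0.29            # Groq median time to first token (s)
output_speed = 1345.7  # Groq median output speed (tokens/s)
thinking_time = 0.0    # Llama 3 8B is not a reasoning model

e2e = ttft + thinking_time + 500 / output_speed
print(f"{e2e:.2f}s")   # ~0.66s, in line with the 0.67s reported below
```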
End-to-End Response Time by Input Token Count (Context Length): Llama 3 8B Providers
Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
100 input tokens
1k input tokens
Input Token Count: Number of tokens provided in the request. See Prompt Options above for benchmarks of different input prompt lengths across other charts.
End-to-End Response Time: Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
Median: Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Notes: Llama 3 8B, Groq: 8k context
API Features
Function (Tool) Calling & JSON Mode: Llama 3 8B Providers
Function (Tool) Calling: Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
JSON Mode: Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the models will always return a valid JSON object.
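Where a provider supports it, JSON mode is typically enabled via a response-format flag on an OpenAI-compatible chat completions call. A minimal sketch; the base URL, API key, and model identifier are placeholders, and whether the flag is honored depends on the provider:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_KEY")

# With JSON mode enabled, the provider constrains the model to emit valid JSON.
resp = client.chat.completions.create(
    model="llama-3-8b-instruct",  # hypothetical model identifier
    messages=[{"role": "user",
               "content": "Return three primary colors as a JSON object."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)
```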
Summary Table of Key Comparison Metrics
| API Provider | Model | Context Window | Intelligence Index | Blended Price ($/M tokens) | Output Speed (tokens/s) | Latency (s, TTFT) | End-to-End Response Time (s) | Features |
|---|---|---|---|---|---|---|---|---|
| Amazon | Llama 3 8B | 8k | 21 | $0.38 | 103.7 | 0.30 | 5.12 | N/A |
| Azure | Llama 3 8B | 8k | 21 | $0.38 | 73.8 | 0.37 | 7.15 | N/A |
| Deepinfra | Llama 3 8B | 8k | 21 | $0.04 | 114.3 | 0.21 | 4.59 | N/A |
| Groq | Llama 3 8B | 8k | 21 | $0.06 | 1,345.7 | 0.29 | 0.67 | N/A |
| Together.ai | Llama 3 8B | 8k | 21 | $0.20 | 186.3 | 0.47 | 3.15 | N/A |