Llama 3 8B: API Provider Benchmarking & Analysis
Meta has launched a newer model, Llama 3.1 8B, which we suggest considering instead.
For more information, see Comparison of Llama 3.1 8B to other models and API provider benchmarks for Llama 3.1 8B.
Comparison Summary

Output Speed: Replicate (66 t/s) and Deepinfra (44 t/s).
Latency (TTFT): Deepinfra (0.30s) and Replicate (0.42s).
Blended Price ($/M tokens, 3:1 input:output): Deepinfra ($0.04) and Replicate ($0.10).
Input Token Price: Deepinfra ($0.03) and Replicate ($0.05).
Output Token Price: Deepinfra ($0.06) offers the lowest output token price for Llama 3 8B, followed by Replicate ($0.25).

Highlights

Speed vs. Price: Llama 3 8B Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
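As a concrete illustration of the blend above, here is a minimal sketch of the 3:1 input:output weighting. Only the ratio and the per-token prices come from this page; the function name is illustrative.

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """Blend input and output token prices at a 3:1 input:output ratio."""
    return (3 * input_usd_per_m + output_usd_per_m) / 4

# Deepinfra's listed prices: $0.03 input, $0.06 output per million tokens
print(blended_price(0.03, 0.06))  # 0.0375, shown on this page as $0.04

# Replicate's listed prices: $0.05 input, $0.25 output per million tokens
print(blended_price(0.05, 0.25))  # 0.10, matching the $0.10 shown above
```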
Pricing

Pricing: Input and Output Prices: Llama 3 8B Providers
Input price: Price per token included in the request/message sent to the API, represented as USD per million tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million tokens.
Latency vs. Output Speed: Llama 3 8B Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Speed
Measured by Output Speed (tokens per second)
Output Speed: Llama 3 8B Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
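For intuition, a minimal sketch of how output speed could be measured against a streaming, OpenAI-compatible endpoint. The base URL, model id, and the one-chunk-per-token approximation are illustrative assumptions, not the exact methodology behind these charts.

```python
import time
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint; not a specific provider's URL.
client = OpenAI(base_url="https://api.example.com/v1", api_key="...")

stream = client.chat.completions.create(
    model="llama-3-8b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Explain TCP slow start."}],
    stream=True,
)

first_chunk_at = None
tokens = 0
for chunk in stream:
    if first_chunk_at is None:
        first_chunk_at = time.perf_counter()  # timing starts at the first chunk
        continue
    tokens += 1  # approximation: one streamed chunk ~ one token
elapsed = time.perf_counter() - first_chunk_at

print(f"~{tokens / elapsed:.0f} tokens/s")
```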
Output Speed by Input Token Count (Context Length): Llama 3 8B Providers
Number of tokens provided in the request. See Prompt Options above for benchmarks of different input prompt lengths across other charts.
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Output Speed Variance: Llama 3 8B Providers
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models which support streaming).

Latency
Measured by Time (seconds) to First Token
Time to First Token: Llama 3 8B Providers
Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
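A matching sketch for time to first token, under the same assumptions (OpenAI-compatible streaming API; the endpoint and model id are placeholders).

```python
import time
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="...")  # placeholder

t0 = time.perf_counter()
stream = client.chat.completions.create(
    model="llama-3-8b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for _ in stream:  # the first iteration yields the first streamed chunk
    print(f"TTFT: {time.perf_counter() - t0:.2f}s")
    break
```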
Latency by Input Token Count (Context Length): Llama 3 8B Providers
Number of tokens provided in the request. See Prompt Options above for benchmarks of different input prompt lengths across other charts.
Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Time to First Token Variance: Llama 3 8B Providers
Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.

End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed.
End-to-End Response Time vs. Price: Llama 3 8B Providers
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
End-to-End Response Time: Llama 3 8B Providers
Seconds to receive a 500 token response. Key components:
- Input time: Time to receive the first response token
- Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason before providing an answer. Token count is based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
For fair comparison, the number of reasoning tokens is standardized across all providers for each model based on the model's representative query token counts.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
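Putting the components together, a sketch of the calculation described above. The function name is ours; Llama 3 8B is not a reasoning model, so its thinking time is zero.

```python
def e2e_seconds(ttft_s: float, output_tps: float,
                thinking_s: float = 0.0, answer_tokens: int = 500) -> float:
    """End-to-end time = input time + thinking time + answer generation time."""
    return ttft_s + thinking_s + answer_tokens / output_tps

# Replicate's figures from the summary table below: 0.42s TTFT, 66 tokens/s
print(round(e2e_seconds(0.42, 66), 2))  # 8.0, in line with the listed 7.99s
```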
End-to-End Response Time by Input Token Count (Context Length): Llama 3 8B Providers
Number of tokens provided in the request. See Prompt Options above for benchmarks of different input prompt lengths across other charts.
Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
API Features
Function (Tool) Calling & JSON Mode: Llama 3 8B Providers
Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the model will always return a valid JSON object.
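Where a provider exposes these features through an OpenAI-compatible API, requests typically look like the sketch below. The endpoint, model id, and tool schema are illustrative assumptions; check the chart above for which providers actually support each feature.

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="...")  # placeholder

# JSON mode: constrain the model to return a valid JSON object.
resp = client.chat.completions.create(
    model="llama-3-8b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "List three HTTP methods as JSON."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)

# Function (tool) calling: the model may return a structured tool call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
resp = client.chat.completions.create(
    model="llama-3-8b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```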
Context Window: Llama 3 8B Providers
Maximum number of combined input and output tokens. Output tokens commonly have a significantly lower limit (varies by model).
While models have their own context window, in some cases providers limit it.
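Since input and output tokens share the window, a request budget can be checked with simple arithmetic. A sketch using the 8k (8,192-token) window listed for these providers; the function is illustrative.

```python
CONTEXT_WINDOW = 8_192  # 8k combined input + output tokens

def fits(input_tokens: int, max_output_tokens: int = 500) -> bool:
    """Check that the prompt plus the requested output stays within the window."""
    return input_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits(7_000))  # True: 7,500 total
print(fits(8_000))  # False: 8,500 exceeds the 8,192-token window
```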
Summary Table of Key Comparison Metrics
| API Provider | Model | Context Window | Function Calling | JSON Mode | Blended Price ($/M tokens) | Output Speed (tokens/s) | Latency, TTFT (s) | End-to-End Response Time (s) | Model Intelligence |
|---|---|---|---|---|---|---|---|---|---|
| Novita | Llama 3 8B | 8k | -- | -- | $0.04 | 73 | 0.79 | 7.60 | N/A |
| Amazon Bedrock | Llama 3 8B | 8k | -- | -- | $0.38 | 84 | 0.31 | 6.23 | N/A |
| Replicate | Llama 3 8B | 8k | -- | -- | $0.10 | 66 | 0.42 | 7.99 | N/A |
| Deepinfra | Llama 3 8B | 8k | -- | -- | $0.04 | 44 | 0.30 | 11.73 | N/A |