
Llama 3 Instruct (8B): API Provider Benchmarking & Analysis

Analysis of API providers for Llama 3 Instruct (8B) across performance metrics including latency (time to first token), throughput (tokens per second), price, and other metrics. API providers benchmarked include Microsoft Azure, Amazon Bedrock, Groq, Together.ai, Perplexity, Fireworks, Deepinfra, Replicate, and OctoAI.
Owner: Meta
License: Open
Context window: 8k
Comparison Summary

Throughput (tokens/s): Groq (839 t/s) and Fireworks (192 t/s) are the fastest providers of Llama 3 (8B), followed by Perplexity, Together.ai & Deepinfra.
Latency (TTFT): Deepinfra (0.17s) and Replicate (0.20s) have the lowest latency for Llama 3 (8B), followed by Perplexity, Groq & OctoAI.
Blended Price ($/M tokens): Groq ($0.06) and Deepinfra ($0.08) are the most cost-effective providers for Llama 3 (8B), followed by Replicate, OctoAI & Together.ai.
Input Token Price: Groq ($0.05) and Replicate ($0.05) offer the lowest input token prices for Llama 3 (8B), followed by Deepinfra, OctoAI & Together.ai.
Output Token Price: Deepinfra ($0.08) and Groq ($0.10) offer the lowest output token prices for Llama 3 (8B), followed by Together.ai, Perplexity & Fireworks.

Highlights

Quality
Quality Index; Higher is better
Speed
Throughput in Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better
Note: Long prompts are not supported, as they require a context window of at least 10k tokens (this model's context window is 8k).

Summary analysis

Throughput vs. Price

Throughput: Tokens per Second, Price: USD per 1M Tokens
Most attractive quadrant
Providers shown: Microsoft Azure, Amazon Bedrock, Groq, Together.ai, Perplexity, Fireworks, Deepinfra, Replicate, OctoAI.
Throughput: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API).
Price: Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Median: Figures represent median (P50) measurement over the past 14 days.
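The blended figure can be reproduced from the input and output token prices; a minimal sketch in Python of the 3:1 blend, using Groq's listed prices ($0.05/M input, $0.10/M output) as example inputs:

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend per-token prices (USD per 1M tokens) using a
    3:1 input:output token ratio."""
    return (3 * input_price + output_price) / 4

# Groq: $0.05/M input, $0.10/M output
print(round(blended_price(0.05, 0.10), 4))  # 0.0625, displayed as ~$0.06/M
```

This matches the $0.06 blended price shown for Groq in the summary table.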

Latency vs. Throughput

Latency: Seconds to First Token Chunk Received; Throughput: Tokens per Second
Most attractive quadrant
Size represents Price (USD per M Tokens)
Providers shown: Microsoft Azure, Amazon Bedrock, Groq, Together.ai, Perplexity, Fireworks, Deepinfra, Replicate, OctoAI.
Throughput: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API).
Latency: Time to first token received, in seconds, after the API request is sent.
Price: Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Median: Figures represent median (P50) measurement over the past 14 days.

Pricing: Input and Output Prices

USD per 1M Tokens; Lower is better
Input price
Output price
Input price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million Tokens.

Speed

Measured by Throughput (tokens per second)

Throughput

Output Tokens per Second; Higher is better
Throughput: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API).
Median: Figures represent median (P50) measurement over the past 14 days.

Throughput Variance

Output Tokens per Second; Results by percentile; Higher median is better
The center point shows the median; the other points represent the 5th, 25th, 75th, and 95th percentiles respectively.
Throughput: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API).
Boxplot: Shows the variance of measurements.
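The percentile points behind such a boxplot can be computed directly from raw samples; a minimal sketch using Python's standard library, with hypothetical throughput measurements (illustrative figures, not from this benchmark):

```python
import statistics

# Hypothetical throughput samples (tokens/s) for one provider
samples = [95, 102, 108, 110, 112, 115, 118, 121, 125, 140]

# quantiles(n=100) returns the 1st..99th percentile cut points
pct = statistics.quantiles(samples, n=100)
p5, p25, p50, p75, p95 = pct[4], pct[24], pct[49], pct[74], pct[94]
print(p50)  # 113.5 -- the median of the samples
```

With only a handful of samples the extreme percentiles (5th, 95th) clamp toward the minimum and maximum; the real charts are based on many measurements over 14 days.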

Throughput, Over Time

Output Tokens per Second; Higher is better
Throughput: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API).
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.

Latency

Measured by Time (seconds) to First Token

Latency

Seconds to First Token Chunk Received; Lower is better
Latency: Time to first token received, in seconds, after the API request is sent.
Median: Figures represent median (P50) measurement over the past 14 days.

Latency Variance

Seconds to First Token Chunk Received; Results by percentile; Lower median is better
The center point shows the median; the other points represent the 5th, 25th, 75th, and 95th percentiles respectively.
Latency: Time to first token received, in seconds, after the API request is sent.
Boxplot: Shows the variance of measurements.

Latency, Over Time

Seconds to First Token Chunk Received; Lower is better
Latency: Time to first token received, in seconds, after the API request is sent.
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.

Total Response Time

Time to receive 100 tokens of output, calculated from the latency and throughput metrics

Total Response Time vs. Price

Total Response Time: Seconds to Output 100 Tokens, Price: USD per 1M Tokens
Most attractive quadrant
Providers shown: Microsoft Azure, Amazon Bedrock, Groq, Together.ai, Perplexity, Fireworks, Deepinfra, Replicate, OctoAI.
Price: Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 input:output ratio).
Total Response Time: Time to receive a 100 token response. Estimated based on Latency (time to receive first chunk) and Throughput (tokens per second).
Median: Figures represent median (P50) measurement over the past 14 days.

Total Response Time

Seconds to Output 100 Tokens; Lower is better
Total Response Time: Time to receive a 100 token response. Estimated based on Latency (time to receive first chunk) and Throughput (tokens per second).
Median: Figures represent median (P50) measurement over the past 14 days.

Total Response Time, Over Time

Seconds to Output 100 Tokens; Lower is better
Total Response Time: Time to receive a 100 token response. Estimated based on Latency (time to receive first chunk) and Throughput (tokens per second).
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.

Summary Table of Key Comparison Metrics

Provider         Context  Quality  Blended Price ($/M)  Throughput (t/s)  Latency (s)
Microsoft Azure  8k       65       $0.55                33.2              1.49
Amazon Bedrock   8k       65       $0.45                80.1              0.30
Groq             8k       65       $0.06                838.8             0.24
Together.ai      8k       65       $0.20                150.5             0.43
Perplexity       8k       65       $0.20                162.0             0.20
Fireworks        8k       65       $0.20                192.4             0.29
Deepinfra        8k       65       $0.08                114.6             0.17
Replicate        8k       65       $0.10                73.5              0.20
OctoAI           8k       65       $0.14                112.4             0.29