
Llama 3.1 70B: API Provider Benchmarking & Analysis

Analysis of API providers for Llama 3.1 Instruct 70B across performance metrics including latency (time to first token), output speed (output tokens per second), price and others. API providers benchmarked include Microsoft Azure, Hyperbolic, Amazon Bedrock, Groq, Together.ai, Perplexity, Google, Fireworks, Cerebras, Lepton AI, Deepinfra, Databricks, SambaNova, and OctoAI.
For a comparison of Llama 3.1 70B to other models, see the model comparison page.
Owner:
Meta
License:
Open
Context window:
128k

Comparison Summary

Output Speed (tokens/s): Cerebras (589 t/s) and SambaNova (444 t/s) are the fastest providers of Llama 3.1 70B, followed by Groq, Fireworks & Google Vertex.
Latency (TTFT): OctoAI (0.31s) and Perplexity (0.34s) have the lowest latency for Llama 3.1 70B, followed by Deepinfra, Cerebras & Fireworks.
Blended Price ($/M tokens): Google Vertex ($0.00) and Deepinfra ($0.36) are the most cost-effective providers of Llama 3.1 70B, followed by Hyperbolic, Cerebras & Groq.
Input Token Price: Google Vertex ($0.00) and Deepinfra ($0.35) offer the lowest input token prices for Llama 3.1 70B, followed by Hyperbolic, Groq & Cerebras.
Output Token Price: Google Vertex ($0.00) and Hyperbolic ($0.40) offer the lowest output token prices for Llama 3.1 70B, followed by Deepinfra, Cerebras & Groq.

Highlights

Quality
Artificial Analysis Quality Index; Higher is better
Speed
Output Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better

Quality & Capabilities

Quality Evaluations (Preliminary Results)

Evaluation results measured independently by Artificial Analysis; Higher is better
Artificial Analysis Quality Index
Reasoning & Knowledge (MMLU)
Quantitative Reasoning (MATH)
Coding (HumanEval)
Artificial Analysis Quality Index: Represents the average of each provider's results across evaluations.

Context Window

Context Window: Tokens Limit; Higher is better
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varied by model).
Variance between providers: While models have their own context window, in some cases this is limited by providers.
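
To make the definition concrete, the sketch below (the function name and figures are illustrative, not taken from the benchmarks) checks whether a request fits within a provider's combined input + output token limit:

```python
def fits_context(input_tokens: int, max_output_tokens: int, context_window: int) -> bool:
    """The context window covers combined input + output tokens; output limits are often lower still."""
    return input_tokens + max_output_tokens <= context_window

print(fits_context(120_000, 4_000, 128_000))  # True: fits the model's full 128k window
print(fits_context(120_000, 4_000, 8_000))    # False: exceeds an 8k provider-limited deployment
```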

Summary Analysis

Output Speed vs. Price

Output Speed: Output Tokens per Second; Price: USD per 1M Tokens
Most attractive quadrant
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context
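
As an illustration of the 3:1 blend used for the price axis, here is a minimal sketch; the function name is illustrative, and the $0.40 output price is a hypothetical figure chosen to show how Deepinfra's listed $0.35 input price yields a blended figure near the $0.36 shown above.

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend per-million-token input and output prices at a 3:1 input:output ratio."""
    return (3 * input_price + 1 * output_price) / 4

# $0.35 input (listed) with a hypothetical $0.40 output price -> ~$0.36 blended.
print(round(blended_price(0.35, 0.40), 2))
```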

Latency vs. Output Speed

Latency: Seconds to First Token Chunk Received; Output Speed: Output Tokens per Second
Most attractive quadrant
Size represents Price (USD per M Tokens)
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Latency: Time to first chunk of tokens received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context

Pricing: Input and Output Prices

USD per 1M Tokens; Lower is better
Input price
Output price
Input price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context

Pricing: Cached Input Tokens

Models | Cache Pricing Notes
Google Vertex
  • Context caching available for Gemini 1.5 Flash/Pro, which allows for a discounted rate on cache hits
  • The cache has a default TTL (Time to Live) of 1 hour, which can be modified through the API
Pricing Cached Input Tokens: Some providers offer a caching layer for input tokens, which can help reduce API usage costs.

Speed

Measured by Output Speed (tokens per second)

Output Speed

Output Tokens per Second; Higher is better
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context
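
For readers who want to reproduce the definition, the sketch below measures time-to-first-token and output speed against a streaming, OpenAI-compatible endpoint. The base URL, API key and model identifier are placeholders, and this is not the Artificial Analysis harness; it simply mirrors the metric definitions used in these charts.

```python
import time
from openai import OpenAI  # assumes an OpenAI-compatible API; not every provider offers one

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="...")  # placeholder endpoint

start = time.perf_counter()
first_chunk_time = None
chunks = []

stream = client.chat.completions.create(
    model="llama-3.1-70b-instruct",  # model identifier varies by provider
    messages=[{"role": "user", "content": "Write a short paragraph about benchmarking."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        if first_chunk_time is None:
            first_chunk_time = time.perf_counter()  # latency: first token chunk received
        chunks.append(delta)
end = time.perf_counter()

ttft = first_chunk_time - start
# Output speed counts tokens generated after the first chunk; the chunk count is only a rough
# proxy for token count here, a real harness would use the model's tokenizer.
output_speed = max(len(chunks) - 1, 1) / (end - first_chunk_time)
print(f"Latency (TTFT): {ttft:.2f}s   Output speed: {output_speed:.1f} tokens/s (approx.)")
```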

Output Speed Variance

Output Tokens per Second; Results by percentile; Higher is better
Median; other points represent the 5th, 25th, 75th and 95th percentiles respectively
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Boxplot: Shows variance of measurements
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context
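
As a sketch of how the percentile points behind a variance chart like this can be computed, assuming a set of repeated measurements (the sample figures below are made up for illustration):

```python
import numpy as np

# Hypothetical repeated output-speed measurements (tokens/s) for one provider over two weeks.
samples = np.array([238, 244, 247, 249, 250, 251, 252, 255, 258, 260])
p5, p25, p50, p75, p95 = np.percentile(samples, [5, 25, 50, 75, 95])
print(f"median={p50}, box=({p25}, {p75}), whiskers=({p5}, {p95})")
```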

Output Speed, Over Time

Output Tokens per Second; Higher is better
Smaller, emerging providers offer high output speed, though precise speeds delivered vary day-to-day.
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context

Output Speed by Input Token Count (Context Length)

Output Tokens per Second; Higher is better
100 input tokens
1k input tokens
10k input tokens
100k input tokens
Input Token Length: Number of tokens provided in the request. See the prompt options above for benchmarks of different input prompt lengths across other charts.
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context

Latency

Measured by Time (seconds) to First Token

Latency

Seconds to First Token Chunk Received; Lower is better
Latency: Time to first chunk of tokens received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.

Latency Variance

Seconds to First Token Chunk Received; Results by percentile; Lower median is better
Median; other points represent the 5th, 25th, 75th and 95th percentiles respectively
Latency: Time to first chunk of tokens received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Boxplot: Shows variance of measurements

Latency, Over Time

Seconds to First Token Chunk Received; Lower is better
Latency: Time to first chunk of tokens received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.

Latency by Input Token Count (Context Length)

Seconds to First Token Chunk Received; Lower is better
100 input tokens
1k input tokens
10k input tokens
100k input tokens
Input Token Length: Number of tokens provided in the request. See the prompt options above for benchmarks of different input prompt lengths across other charts.
Latency: Time to first chunk of tokens received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.

Total Response Time

Time to receive 100 tokens of output, calculated from latency and output speed metrics

Total Response Time vs. Price

Total Response Time: Seconds to Output 100 Tokens; Price: USD per 1M Tokens
Most attractive quadrant
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Total Response Time: Time to receive a 100 token response. Calculated based on Latency (time to receive first token) and Output Speed (output tokens per second).
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.
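
A minimal sketch of that calculation follows; the function name is illustrative, and the inputs reuse Groq's latency and output speed from the summary table below.

```python
def total_response_time(latency_s: float, output_speed_tps: float, output_tokens: int = 100) -> float:
    """Time to first token plus time to generate the remaining output at the measured speed."""
    return latency_s + output_tokens / output_speed_tps

# Groq's figures from the summary table: 0.44 s latency, 249.8 tokens/s output speed.
print(round(total_response_time(0.44, 249.8), 2))  # ~0.84 s to output 100 tokens
```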

Total Response Time

Seconds to Output 100 Tokens; Lower is better
Total Response Time: Time to receive a 100 token response. Calculated based on Latency (time to receive first token) and Output Speed (output tokens per second).
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.

Total Response Time, Over Time

Seconds to Output 100 Tokens; Lower is better
Total Response Time: Time to receive a 100 token response. Calculated based on Latency (time to receive first token) and Output Speed (output tokens per second).
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context

Total Response Time by Input Token Count (Context Length)

Seconds to Output 100 Tokens; Lower is better
100 input tokens
1k input tokens
10k input tokens
100k input tokens
Input Token Length: Number of tokens provided in the request. See the prompt options above for benchmarks of different input prompt lengths across other charts.
Total Response Time: Time to receive a 100 token response. Calculated based on Latency (time to receive first token) and Output Speed (output tokens per second).
Median: Figures represent median (P50) measurement over the past 14 days or otherwise to reflect sustained changes in performance.
Notes: Llama 3.1 70B, Cerebras: 8k context, Llama 3.1 70B, SambaNova: 8k context

API Features

API Features: Function (Tool) Calling & JSON Mode

Models | Function Calling | JSON Mode
Cerebras
Hyperbolic
Amazon
OctoAI
Lepton AI
Google Vertex
Azure
Fireworks
Deepinfra
Groq
SambaNova
Databricks
Perplexity
Together.ai Turbo
Function (Tool) Calling: Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.
JSON Mode: Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the models will always return a valid JSON object.
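
For reference, a minimal sketch of a JSON-mode request against an OpenAI-compatible endpoint; the response_format parameter follows the OpenAI-style schema, and whether a given provider honours it (or function calling) for Llama 3.1 70B is exactly what the table above tracks. The endpoint, key and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="...")  # placeholder endpoint

resp = client.chat.completions.create(
    model="llama-3.1-70b-instruct",  # model identifier varies by provider
    messages=[
        {"role": "system", "content": "Reply with a JSON object with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
    response_format={"type": "json_object"},  # JSON mode: the model must return a valid JSON object
)
print(resp.choices[0].message.content)
```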

Summary Table of Key Comparison Metrics

Provider | Model | Context Window | Model Quality | Blended Price ($/M tokens) | Output Speed (tokens/s) | Latency (s)
Cerebras | Llama 3.1 70B | 8k | 95 | $0.60 | 588.5 | 0.39
Hyperbolic | Llama 3.1 70B | 128k | 95 | $0.40 | 26.8 | 0.71
Amazon Bedrock | Llama 3.1 70B | 128k | 95 | $0.99 | 31.5 | 0.69
OctoAI | Llama 3.1 70B | 128k | 95 | $0.90 | 66.5 | 0.31
Lepton AI | Llama 3.1 70B | 128k | 95 | $0.80 | 57.0 | 0.56
Google Vertex | Llama 3.1 70B Vertex | 128k | 95 | $0.00 | 72.2 | 0.44
Microsoft Azure | Llama 3.1 70B | 128k | 95 | $2.90 | 26.8 | 0.59
Fireworks | Llama 3.1 70B | 128k | 95 | $0.90 | 112.5 | 0.41
Deepinfra | Llama 3.1 70B | 128k | 95 | $0.36 | 25.7 | 0.34
Groq | Llama 3.1 70B | 128k | 95 | $0.64 | 249.8 | 0.44
SambaNova | Llama 3.1 70B | 8k | 95 | $0.75 | 443.6 | 0.79
Databricks | Llama 3.1 70B | 128k | 95 | $1.50 | 55.2 | 0.57
Perplexity | Llama 3.1 70B | 128k | 95 | $1.00 | 50.5 | 0.34
Together.ai Turbo | Llama 3.1 70B Turbo | 128k | 95 | $0.88 | 30.7 | 0.69