Llama 2 Chat (13B): API Provider Benchmarking & Analysis
Analysis of API providers for Llama 2 Chat (13B) across performance metrics including latency (time to first token), throughput (tokens per second), price and others. API providers benchmarked include Microsoft Azure, Amazon Bedrock, Together.ai, Fireworks, Deepinfra, Replicate, and OctoAI.
Comparison Summary
Throughput (tokens/s): Fireworks (144 t/s) and Replicate (78 t/s) are the fastest providers of Llama 2 Chat (13B), followed by Amazon, Together.ai & OctoAI.
Latency (TTFT): OctoAI (0.23s) and Fireworks (0.27s) have the lowest latency for Llama 2 Chat (13B), followed by Amazon, Deepinfra & Together.ai.
Blended Price ($/M tokens): Fireworks ($0.20) and Replicate ($0.20) are the most cost-effective providers for Llama 2 Chat (13B), followed by Together.ai, OctoAI & Deepinfra.
Input Token Price: Replicate ($0.10) and Fireworks ($0.20) offer the lowest input token prices for Llama 2 Chat (13B), followed by OctoAI, Together.ai & Deepinfra.
Output Token Price: Fireworks ($0.20) and Together.ai ($0.23) offer the lowest output token prices for Llama 2 Chat (13B), followed by Deepinfra, Replicate & OctoAI.
Highlights
Quality: Quality Index; higher is better
Speed: Throughput in tokens per second; higher is better
Price: USD per 1M tokens; lower is better
Note: Long prompts are not supported, as they require a context window of at least 10k tokens (Llama 2 Chat (13B) has a 4k context window).
Summary analysis
Throughput vs. Price
Throughput: Tokens per Second, Price: USD per 1M Tokens
Providers shown: Microsoft Azure, Amazon Bedrock, Together.ai, Fireworks, Deepinfra, Replicate, OctoAI. The most attractive quadrant combines high throughput with low price.
Throughput: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API).
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Median: Figures represent median (P50) measurement over the past 14 days.
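The 3:1 blend can be expressed directly. A minimal sketch of the calculation (the function name is ours; the Fireworks prices come from the pricing section below):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend input and output token prices (USD per 1M tokens)
    at the 3:1 input:output ratio used throughout this analysis."""
    return (3 * input_price + output_price) / 4

# Fireworks charges $0.20 for both input and output tokens,
# so its blended price is also $0.20 per 1M tokens.
fireworks_blend = blended_price(0.20, 0.20)
```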
Latency vs. Throughput
Latency: Seconds to First Token Chunk Received, Throughput: Tokens per Second
Providers shown: Microsoft Azure, Amazon Bedrock, Together.ai, Fireworks, Deepinfra, Replicate, OctoAI. Bubble size represents price (USD per 1M tokens); the most attractive quadrant combines low latency with high throughput.
Latency: Time to first token received, in seconds, after the API request is sent.
Pricing
Pricing: Input and Output Prices
USD per 1M Tokens; Lower is better
Input price
Output price
Input price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
Speed
Measured by Throughput (tokens per second)
Throughput
Output Tokens per Second; Higher is better
Throughput Variance
Output Tokens per Second; Results by percentile; Higher median is better
Boxplot points represent the median plus the 5th, 25th, 75th, and 95th percentiles.
Boxplot: Shows variance of measurements
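The boxplot percentiles can be reproduced from raw samples with the Python standard library. The sample values here are made up for illustration; the real charts use measurements collected over the trailing 14 days:

```python
import statistics

# Hypothetical per-request throughput samples (tokens/s)
samples = [38.2, 41.5, 42.0, 44.1, 39.8, 45.3, 40.7, 43.9, 37.5, 46.2]

# quantiles(n=20) returns 19 cut points, one at every 5th percentile;
# pick out the ones the boxplot displays.
q = statistics.quantiles(samples, n=20, method="inclusive")
p5, p25, p75, p95 = q[0], q[4], q[14], q[18]
median = statistics.median(samples)
```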
Throughput, Over Time
Output Tokens per Second; Higher is better
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent start of week's measurements.
Latency
Measured by Time (seconds) to First Token
Latency
Seconds to First Token Chunk Received; Lower is better
Latency Variance
Seconds to First Token Chunk Received; Results by percentile; Lower median is better
Latency, Over Time
Seconds to First Token Chunk Received; Lower is better
Total Response Time
Time to receive a 100-token output, calculated from the latency and throughput metrics
Total Response Time vs. Price
Total Response Time: Seconds to Output 100 Tokens, Price: USD per 1M Tokens
Providers shown: Microsoft Azure, Amazon Bedrock, Together.ai, Fireworks, Deepinfra, Replicate, OctoAI. The most attractive quadrant combines low response time with low price.
Total Response Time: Time to receive a 100 token response. Estimated based on Latency (time to receive first chunk) and Throughput (tokens per second).
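The estimate simply combines the two medians. A minimal sketch of the calculation (the helper name is ours; the Fireworks figures appear in the summary table below):

```python
def total_response_time(latency_s: float, throughput_tps: float,
                        n_tokens: int = 100) -> float:
    """Estimated seconds to receive an n_tokens response: time to first
    chunk, plus generation time at the measured throughput."""
    return latency_s + n_tokens / throughput_tps

# Fireworks medians: 0.27 s latency, 144.2 tokens/s throughput,
# giving roughly a 1 second total for a 100-token response.
estimate = total_response_time(0.27, 144.2)
```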
Total Response Time
Seconds to Output 100 Tokens; Lower is better
Total Response Time, Over Time
Seconds to Output 100 Tokens; Lower is better
Summary Table of Key Comparison Metrics
Provider | Context | Model Quality | Blended Price ($/M tokens) | Throughput (tokens/s) | Latency, TTFT (s)
---|---|---|---|---|---
Microsoft Azure | 4k | 37 | $0.84 | 42.0 | 1.22
Amazon Bedrock | 4k | 37 | $0.81 | 53.0 | 0.29
Together.ai | 4k | 37 | $0.23 | 50.3 | 0.30
Fireworks | 4k | 37 | $0.20 | 144.2 | 0.27
Deepinfra | 4k | 37 | $0.35 | 41.1 | 0.29
Replicate | 4k | 37 | $0.20 | 78.4 | 1.22
OctoAI | 4k | 37 | $0.28 | 50.1 | 0.23
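As a cross-check, the latency and throughput medians can be combined into the 100-token response-time estimate described earlier. The provider-to-row pairing below follows the chart legend order and is our reading of the table:

```python
# Median latency (s) and throughput (tokens/s) per provider, from the table
providers = {
    "Microsoft Azure": (1.22, 42.0),
    "Amazon Bedrock": (0.29, 53.0),
    "Together.ai": (0.30, 50.3),
    "Fireworks": (0.27, 144.2),
    "Deepinfra": (0.29, 41.1),
    "Replicate": (1.22, 78.4),
    "OctoAI": (0.23, 50.1),
}

# Estimated seconds to receive 100 tokens: latency + 100 / throughput
ranked = sorted(
    (lat + 100 / tps, name) for name, (lat, tps) in providers.items()
)
for seconds, name in ranked:
    print(f"{name}: {seconds:.2f}s")
```

On these numbers, Fireworks' throughput advantage dominates: it is the only provider under one second for a 100-token response.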