
Mistral 7B Instruct: API Provider Benchmarking & Analysis

Analysis of API providers for Mistral 7B Instruct across key performance metrics: latency (time to first token), throughput (tokens per second), price, and more. API providers benchmarked include Mistral, Amazon Bedrock, Together.ai, Perplexity, Fireworks, Baseten, Deepinfra, Replicate, and OctoAI.
For a comparison of Mistral 7B to other models, see the model comparison page.

Owner: Mistral
License: Open
Context window: 33k
Comparison Summary

Throughput (tokens/s): Fireworks (254 t/s) and Baseten (231 t/s) are the fastest providers of Mistral 7B, followed by Perplexity, Replicate & OctoAI.
Latency (TTFT): Baseten (0.13s) and Fireworks (0.18s) have the lowest latency for Mistral 7B, followed by Perplexity, Mistral & OctoAI.
Blended Price ($/M tokens): Replicate ($0.10) and Deepinfra ($0.13) are the most cost-effective providers for Mistral 7B, followed by OctoAI, Amazon & Together.ai.
Input Token Price: Replicate ($0.05) and OctoAI ($0.10) offer the lowest input token prices for Mistral 7B, followed by Deepinfra, Amazon & Together.ai.
Output Token Price: Deepinfra ($0.13) and Amazon ($0.20) offer the lowest output token prices for Mistral 7B, followed by Together.ai, Perplexity & Fireworks.

Highlights

Quality: Quality Index; higher is better
Speed: Throughput in tokens per second; higher is better
Price: USD per 1M tokens; lower is better

Summary analysis

Throughput vs. Price

Throughput: Tokens per Second, Price: USD per 1M Tokens
[Chart: Throughput vs. Price for each provider. The most attractive quadrant is high throughput at low price.]
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Price: Price per token, represented as USD per million tokens; a blend of input and output token prices (3:1 ratio).
Median: Figures represent the median (P50) measurement over the past 14 days.
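The blended price can be reproduced directly from per-token prices. A minimal sketch, assuming the 3:1 blend weights input tokens three times as heavily as output tokens (the direction of the ratio is an assumption, and the example prices are illustrative):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend input/output token prices (USD per 1M tokens) at an
    assumed 3:1 input-to-output ratio."""
    return (3 * input_price + 1 * output_price) / 4

# Illustration: equal input/output pricing of $0.13/$0.13 blends to $0.13.
print(round(blended_price(0.13, 0.13), 2))  # 0.13
```

With equal input and output prices the blend is unaffected by the ratio; the weighting only matters when the two prices diverge.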

Latency vs. Throughput

Latency: Seconds to First Token Chunk Received, Throughput: Tokens per Second
[Chart: Latency vs. Throughput for each provider; bubble size represents price (USD per M tokens). The most attractive quadrant is low latency at high throughput.]
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Latency: Time to first token received, in seconds, after the API request is sent.
Price: Price per token, represented as USD per million tokens; a blend of input and output token prices (3:1 ratio).
Median: Figures represent the median (P50) measurement over the past 14 days.

Pricing: Input and Output Prices

USD per 1M Tokens; Lower is better
Input price
Output price
Input price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million Tokens.

Speed

Measured by Throughput (tokens per second)

Throughput

Output Tokens per Second; Higher is better
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Median: Figures represent the median (P50) measurement over the past 14 days.

Throughput Variance

Output Tokens per Second; Results by percentile; Higher median is better
Boxplot: Shows the variance of measurements; the center point is the median, and the other points represent the 5th, 25th, 75th, and 95th percentiles.
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
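The percentile figures behind the boxplots can be computed with the standard library. A sketch using made-up throughput measurements for illustration:

```python
from statistics import median, quantiles

# Hypothetical throughput samples (tokens/s) — not real benchmark data.
measurements = [60.1, 62.5, 63.3, 64.0, 58.9, 61.7, 65.2, 63.8]

# P50 (median) is what the headline figures report.
p50 = median(measurements)

# quantiles(n=100) returns the 99 percentile cut points;
# index k-1 corresponds to the k-th percentile.
pcts = quantiles(measurements, n=100, method='inclusive')
p5, p25, p75, p95 = pcts[4], pcts[24], pcts[74], pcts[94]

print(p50, p5, p25, p75, p95)
```

The 5th/25th/75th/95th points drawn on each boxplot correspond to these cut points; a wide spread between p5 and p95 signals inconsistent provider performance.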

Throughput, Over Time

Output Tokens per Second; Higher is better
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Over-time measurement: Median measurement per day, based on 8 measurements taken at different times each day. Labels represent the start of each week's measurements.

Latency

Measured by Time (seconds) to First Token

Latency

Seconds to First Token Chunk Received; Lower is better
Latency: Time to first token received, in seconds, after the API request is sent.
Median: Figures represent the median (P50) measurement over the past 14 days.

Latency Variance

Seconds to First Token Chunk Received; Results by percentile; Lower median is better
Boxplot: Shows the variance of measurements; the center point is the median, and the other points represent the 5th, 25th, 75th, and 95th percentiles.
Latency: Time to first token received, in seconds, after the API request is sent.

Latency, Over Time

Seconds to First Token Chunk Received; Lower is better
Latency: Time to first token received, in seconds, after the API request is sent.
Over-time measurement: Median measurement per day, based on 8 measurements taken at different times each day. Labels represent the start of each week's measurements.

Total Response Time

Time to receive 100 tokens output, calculated by latency and throughput metrics
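The estimate combines the two measured metrics: latency gives the wait for the first chunk, and throughput governs how fast the remaining tokens stream in. A minimal sketch (the exact methodology may differ):

```python
def total_response_time(latency_s: float, throughput_tps: float,
                        output_tokens: int = 100) -> float:
    """Estimate seconds to receive `output_tokens` tokens:
    time to first chunk plus streaming time at the measured rate."""
    return latency_s + output_tokens / throughput_tps

# Fireworks-style figures from the summary table: 0.18 s TTFT, 253.9 tokens/s.
print(round(total_response_time(0.18, 253.9), 2))  # 0.57
```

Note how a high-throughput provider can still lose on short responses if its latency is high: at 100 output tokens, Replicate's 1.50 s TTFT dominates its streaming time.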

Total Response Time vs. Price

Total Response Time: Seconds to Output 100 Tokens, Price: USD per 1M Tokens
[Chart: Total Response Time vs. Price for each provider. The most attractive quadrant is low response time at low price.]
Price: Price per token, represented as USD per million tokens; a blend of input and output token prices (3:1 ratio).
Total Response Time: Time to receive a 100-token response. Estimated from latency (time to receive the first chunk) and throughput (tokens per second).
Median: Figures represent the median (P50) measurement over the past 14 days.

Total Response Time

Seconds to Output 100 Tokens; Lower is better
Total Response Time: Time to receive a 100-token response. Estimated from latency (time to receive the first chunk) and throughput (tokens per second).
Median: Figures represent the median (P50) measurement over the past 14 days.

Total Response Time, Over Time

Seconds to Output 100 Tokens; Lower is better
Total Response Time: Time to receive a 100-token response. Estimated from latency (time to receive the first chunk) and throughput (tokens per second).
Over-time measurement: Median measurement per day, based on 8 measurements taken at different times each day. Labels represent the start of each week's measurements.

Summary Table of Key Comparison Metrics

| Provider | Context | Model Quality | Blended Price ($/M tokens) | Throughput (tokens/s) | Latency (TTFT, s) |
|---|---|---|---|---|---|
| Mistral | 33k | 40 | $0.25 | 63.3 | 0.20 |
| Amazon Bedrock | 33k | 40 | $0.16 | 71.7 | 0.29 |
| Together.ai | 8k | 40 | $0.20 | 77.8 | 0.41 |
| Perplexity | 16k | 40 | $0.20 | 104.4 | 0.20 |
| Fireworks | 33k | 40 | $0.20 | 253.9 | 0.18 |
| Baseten | 4k | 40 | $0.20 | 231.0 | 0.13 |
| Deepinfra | 33k | 40 | $0.13 | 44.0 | 0.44 |
| Replicate | 33k | 40 | $0.10 | 79.1 | 1.50 |
| OctoAI | 33k | 40 | $0.14 | 78.4 | 0.21 |