Independent analysis of AI language models and API providers

Understand the AI landscape and choose the best model and API provider for your use-case

Highlights

Quality: Quality Index; Higher is better
Speed: Throughput in Tokens per Second; Higher is better
Price: USD per 1M Tokens; Lower is better

Language Models Comparison Highlights

Quality Comparison by Ability

Metrics vary by ability category; Higher is better
General Ability (Chatbot Arena)
Reasoning & Knowledge (MMLU)
Reasoning & Knowledge (MT Bench)
Coding (HumanEval)
With the launch of models including Anthropic's Claude 3 Opus and Mistral Large, OpenAI's GPT-4 is no longer the clear quality leader. Models rivaling GPT-3.5's performance have also been released, including Gemini Pro, Mixtral 8x7B and DBRX.
Median across providers: Figures represent median (P50) across all providers which support the model.

Quality vs. Throughput

Quality: General reasoning index, Throughput: Tokens per Second, Price: USD per 1M Tokens
Most attractive quadrant
Size represents Price (USD per 1M Tokens)
There is a trade-off between model quality and throughput, with higher quality models typically having lower throughput.
Quality: Index represents normalized average relative performance across Chatbot Arena, MMLU & MT-Bench.
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Median across providers: Figures represent median (P50) across all providers which support the model.
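For clarity, the blended price figure can be reproduced with simple arithmetic. The sketch below (Python, with illustrative prices rather than real provider pricing) weights input and output prices at the stated 3:1 ratio:

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend input and output prices (USD per 1M tokens) at a 3:1
    input:output token ratio, as used for the single Price figure."""
    return (3 * input_price + 1 * output_price) / 4

# Illustrative example: $10 / 1M input tokens, $30 / 1M output tokens
print(blended_price(10.0, 30.0))  # 15.0 USD per 1M tokens (blended)
```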

Quality vs. Price

Higher quality models are typically more expensive. However, model quality varies significantly, and some open-source models now achieve very high quality.
Quality: Index represents normalized average relative performance across Chatbot Arena, MMLU & MT-Bench.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Median across providers: Figures represent median (P50) across all providers which support the model.
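The quality index is described as a normalized average of relative performance across the three benchmarks. One plausible way to construct such an index is sketched below; the min-max normalization and the example scores are assumptions for illustration, not the exact published methodology:

```python
# Min-max normalize each benchmark across models, then average the
# normalized scores per model to get a single quality index.
def quality_index(scores_by_benchmark: dict[str, dict[str, float]]) -> dict[str, float]:
    normalized: dict[str, list[float]] = {}
    for bench, scores in scores_by_benchmark.items():
        lo, hi = min(scores.values()), max(scores.values())
        for model, s in scores.items():
            normalized.setdefault(model, []).append(
                (s - lo) / (hi - lo) if hi > lo else 0.0
            )
    return {model: sum(vals) / len(vals) for model, vals in normalized.items()}

# Hypothetical scores, for illustration only
example = {
    "chatbot_arena": {"model_a": 1250, "model_b": 1100},
    "mmlu": {"model_a": 0.86, "model_b": 0.70},
    "mt_bench": {"model_a": 9.0, "model_b": 8.2},
}
print(quality_index(example))  # {'model_a': 1.0, 'model_b': 0.0}
```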

Throughput

Output Tokens per Second; Higher is better
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Median across providers: Figures represent median (P50) across all providers which support the model.
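As a concrete illustration of this definition, the sketch below measures throughput from a token stream. The `stream_chunks` iterable of (text, token_count) pairs is a hypothetical stand-in for a streaming API response, not a specific provider SDK:

```python
import time

def measure_throughput(stream_chunks) -> float:
    """Tokens per second while the model is generating, counted from the
    moment the first chunk arrives (time-to-first-token is excluded)."""
    first_time = None
    last_time = None
    tokens_after_first = 0
    for _, token_count in stream_chunks:
        now = time.monotonic()
        if first_time is None:
            first_time = last_time = now  # start the clock at the first chunk
        else:
            tokens_after_first += token_count
            last_time = now
    if first_time is None or last_time == first_time:
        return 0.0
    return tokens_after_first / (last_time - first_time)
```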

Pricing: Input and Output Prices

USD per 1M Tokens
Input price
Output price
Prices vary considerably, both across models and between input and output tokens. GPT-4 stands out as orders of magnitude more expensive than the cheapest models.
Input price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
Median across providers: Figures represent median (P50) across all providers which support the model.

API Provider Highlights: Mixtral 8x7B Instruct

Charts below show providers for Mixtral 8x7B Instruct

Throughput vs. Price: Mixtral 8x7B Instruct

Throughput: Tokens per Second, Price: USD per 1M Tokens
Most attractive quadrant
Providers shown: Mistral, Amazon Bedrock, Groq, Together.ai, Perplexity, Fireworks, Lepton AI, Deepinfra, Replicate, OctoAI
Smaller, emerging providers are offering high throughput at competitive prices.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Median: Figures represent median (P50) measurement over the past 14 days.
Variance data is available on the model and API provider pages, within the detailed performance metrics. See 'Compare Models' and 'Compare API Providers' in the navigation menu for further analysis.

Pricing (Input and Output Prices): Mixtral 8x7B Instruct

Price: USD per 1M Tokens; Lower is better
Input price
Output price
Providers typically charge different prices for input and output tokens, so the mix of input and output tokens in a given use-case can significantly impact overall costs.
Input price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Output price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
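To illustrate why the input/output mix matters, the sketch below computes the cost of a single request from token counts and per-million-token prices. The prices and token counts are placeholders, not actual provider pricing:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD of one request, given token counts and USD-per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Input-heavy (e.g. summarization) vs. output-heavy (e.g. generation) workloads
# at illustrative prices of $0.30 / 1M input and $1.00 / 1M output tokens:
print(request_cost(10_000, 500, 0.30, 1.00))   # 0.0035 USD
print(request_cost(500, 10_000, 0.30, 1.00))   # 0.01015 USD
```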

Throughput, Over Time: Mixtral 8x7B Instruct

Output Tokens per Second; Higher is better
Providers shown: Mistral, Amazon Bedrock, Groq, Together.ai, Perplexity, Fireworks, Lepton AI, Deepinfra, Replicate, OctoAI
Smaller, emerging providers offer high throughput, though the throughput delivered varies day-to-day.
Throughput: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API).
Over time measurement: Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.
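As an illustration of this aggregation, the snippet below takes the median (P50) of one day's 8 measurements. The sample values are illustrative, not real measurements:

```python
import statistics

# 8 throughput measurements (tokens/s) taken at different times of one day
daily_measurements = [55.2, 61.8, 58.9, 47.3, 63.1, 59.4, 52.0, 60.5]
daily_median = statistics.median(daily_measurements)
print(daily_median)  # 59.15 tokens/s plotted for that day
```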
See more information on any of our supported models
OpenAI
GPT-4
GPT-4 Turbo
GPT-4 Turbo (Vision)
GPT-3.5 Turbo
GPT-3.5 Turbo Instruct
Meta
Llama 3 Instruct (70B)
Llama 2 Chat (13B)
Llama 2 Chat (70B)
Llama 3 Instruct (8B)
Llama 2 Chat (7B)
Code Llama Instruct (70B)
Mistral
Mistral Large
Mistral Medium
Mixtral 8x22B Instruct
Mixtral 8x7B Instruct
Mistral Small
Mistral 7B Instruct
Google
Gemini 1.5 Pro
Gemini 1.0 Pro
Gemma 7B Instruct
Anthropic
Claude 3 Opus
Claude 3 Sonnet
Claude 3 Haiku
Claude 2.1
Claude 2.0
Claude Instant
Cohere
Command-R+
Command-R
Command
Command Light
Databricks
DBRX Instruct
OpenChat
OpenChat 3.5 (1210)
Perplexity
PPLX-70B Online
PPLX-7B Online