Llama 3.1 Instruct 405B: Quality, Performance & Price Analysis
Analysis of Meta's Llama 3.1 Instruct 405B compared to other AI models across key metrics, including quality, price, performance (tokens per second and time to first token), and context window. For more details, including our methodology, see our FAQs.
Comparison Summary
Quality: Llama 3.1 405B is of higher quality than average, with an MMLU score of 0.886 and a Quality Index across evaluations of 74.
Price: Llama 3.1 405B is more expensive than average, with a price of $3.50 per 1M tokens (blended 3:1; see the worked example after this summary). Input token price: $3.50 per 1M tokens; output token price: $3.50 per 1M tokens.
Speed: Llama 3.1 405B is slower than average, with an output speed of 28.9 tokens per second.
Latency: Llama 3.1 405B has lower latency than average, taking 0.72s to receive the first token (TTFT).
Context Window: Llama 3.1 405B has a smaller context window than average, at 130k tokens.
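To make the blended price and the speed/latency figures concrete, the sketch below shows how they combine. It is a minimal illustration, not Artificial Analysis' exact methodology: the 3:1 input:output weighting is taken from the "blended 3:1" label above, and the 500-token response length used in the example is a hypothetical assumption.

```python
# Illustrative only: combines the quoted figures for Llama 3.1 405B.
INPUT_PRICE = 3.50    # USD per 1M input tokens
OUTPUT_PRICE = 3.50   # USD per 1M output tokens
OUTPUT_SPEED = 28.9   # output tokens per second
TTFT = 0.72           # seconds to first token

def blended_price(input_price: float, output_price: float) -> float:
    """Blend input and output prices at a 3:1 input:output ratio."""
    return (3 * input_price + 1 * output_price) / 4

def response_time(output_tokens: int, ttft: float = TTFT,
                  speed: float = OUTPUT_SPEED) -> float:
    """Rough end-to-end time: time to first token plus generation time."""
    return ttft + output_tokens / speed

print(f"Blended price: ${blended_price(INPUT_PRICE, OUTPUT_PRICE):.2f} per 1M tokens")
print(f"Estimated time for a 500-token response: {response_time(500):.1f}s")
```

With equal input and output prices, the blend is simply $3.50 per 1M tokens; a hypothetical 500-token response would take roughly 18 seconds at the measured output speed.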
Highlights
Quality: Artificial Analysis Quality Index (higher is better)
Speed: Output Tokens per Second (higher is better)
Price: USD per 1M Tokens (lower is better)
Further details