LLM Leaderboard - Comparison of GPT-4o, Llama 3, Mistral, Gemini and over 30 models
Comparison and ranking of the performance of over 30 AI models (LLMs) across key metrics including quality, price, performance and speed (output speed in tokens per second, and latency as time to first token, TTFT), context window, and others. For more details, including our methodology, see our FAQs.
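The speed figures in the table can be reproduced, at least roughly, with a client-side streaming benchmark: record the time until the first token arrives (TTFT) and divide the number of generated tokens by the remaining generation time. The sketch below is a minimal illustration, assuming the OpenAI Python SDK (v1) and a placeholder model name; it is not the methodology behind this leaderboard.

```python
# Illustrative only: rough client-side measurement of TTFT and output speed.
# Assumes the OpenAI Python SDK v1 (`pip install openai`) and an API key in OPENAI_API_KEY.
# The model name below is a placeholder, not a recommendation from this page.
import time
from openai import OpenAI

client = OpenAI()

def measure(model: str, prompt: str) -> dict:
    start = time.perf_counter()
    first_token_at = None
    chunks = []

    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # time to first token (TTFT)
            chunks.append(delta)
    end = time.perf_counter()

    ttft = first_token_at - start
    # Approximate token count by the number of streamed chunks; a tokenizer would be more precise.
    output_tokens = len(chunks)
    tokens_per_second = output_tokens / (end - first_token_at) if end > first_token_at else 0.0
    return {"ttft_s": round(ttft, 2), "output_tokens_per_s": round(tokens_per_second, 1)}

print(measure("gpt-4o-mini", "Write a haiku about benchmarks."))
```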
Model | Context Window | Quality Index | Price (USD per 1M tokens) | Output Speed (tokens/s) | Latency (TTFT, seconds)
---|---|---|---|---|---
o1-preview | 128k | 85 | $26.25 | 30.8 | 32.56
o1-mini | 128k | 82 | $5.25 | 69.2 | 14.75
GPT-4o | 128k | 77 | $4.38 | 118.4 | 0.44
GPT-4o (May '24) | 128k | 77 | $7.50 | 111.0 | 0.43
GPT-4o mini | 128k | 71 | $0.26 | 97.7 | 0.46
Llama 3.1 405B | 128k | 72 | $4.50 | 22.2 | 0.88
Llama 3.2 90B (Vision) | 128k | 66 | $0.90 | 39.8 | 0.51
Llama 3.1 70B | 128k | 65 | $0.89 | 52.1 | 0.54
Llama 3.2 11B (Vision) | 128k | 54 | $0.19 | 117.3 | 0.35
Llama 3.1 8B | 128k | 53 | $0.15 | 163.2 | 0.38
Llama 3.2 3B | 128k | 47 | $0.08 | 151.2 | 0.35
Llama 3.2 1B | 128k | 27 | $0.05 | 555.7 | 0.35
Gemini 1.5 Pro (Sep '24) | 2m | 80 | $2.19 | 60.3 | 0.77
Gemini 1.5 Flash-8B | 1m | | $0.07 | 284.5 | 0.35
Gemini 1.5 Flash (Sep '24) | 1m | 73 | $0.13 | 209.5 | 0.37
Gemma 2 27B | 8k | 61 | $0.80 | 68.8 | 0.34
Gemma 2 9B | 8k | 46 | $0.20 | 119.3 | 0.29
Gemini 1.5 Flash (May '24) | 1m | | $0.13 | 306.4 | 0.33
Gemini 1.5 Pro (May '24) | 2m | | $5.25 | 65.1 | 0.77
Claude 3.5 Sonnet | 200k | 77 | $6.00 | 57.8 | 0.88
Claude 3 Opus | 200k | 70 | $30.00 | 26.5 | 2.11
Claude 3 Haiku | 200k | 54 | $0.50 | 132.1 | 0.47
Mistral Large 2 | 128k | 73 | $4.50 | 36.1 | 0.49
Mixtral 8x22B | 65k | 61 | $1.20 | 68.7 | 0.40
Mistral Small (Sep '24) | 128k | 60 | $0.30 | 61.8 | 0.48
Pixtral 12B | 128k | 56 | $0.13 | 79.8 | 0.51
Mistral NeMo | 128k | 52 | $0.15 | 109.3 | 0.36
Mixtral 8x7B | 33k | 42 | $0.50 | 90.3 | 0.36
Codestral-Mamba | 256k | 36 | $0.25 | 94.8 | 0.54
Command-R+ | 128k | 56 | $5.19 | 47.0 | 0.51
Command-R | 128k | 51 | $0.51 | 105.9 | 0.34
Command-R+ (Apr '24) | 128k | 46 | $6.00 | 45.5 | 0.56
Command-R (Mar '24) | 128k | 36 | $0.75 | 106.4 | 0.37
Sonar 3.1 Small | 131k | | $0.20 | 139.3 | 0.31
Sonar 3.1 Large | 131k | | $1.00 | 56.1 | 0.33
Phi-3 Medium 14B | 128k | | $0.30 | 51.2 | 0.44
Solar Mini | 4k | 48 | $0.15 | 89.0 | 1.07
Solar Pro | 4k | 61 | $0.25 | 51.4 | 1.19
DBRX | 33k | 49 | $1.16 | 85.8 | 0.46
Reka Core | 128k | 57 | $3.00 | 14.8 | 1.14
Reka Flash | 128k | 46 | $0.35 | 30.0 | 1.20
Reka Edge | 64k | 30 | $0.10 | 35.2 | 0.91
Jamba 1.5 Large | 256k | 64 | $3.50 | 51.3 | 0.71
Jamba 1.5 Mini | 256k | 46 | $0.25 | 82.7 | 0.51
DeepSeek-Coder-V2 | 128k | 67 | $0.17 | 16.6 | 1.16
DeepSeek-V2 | 128k | 66 | $0.17 | 16.8 | 1.17
DeepSeek-V2.5 | 128k | 66 | $1.09 | 14.5 | 1.06
Qwen2.5 72B | 131k | 75 | $0.38 | 35.0 | 0.52
Qwen2 72B | 128k | 69 | $0.90 | 56.1 | 0.42
Yi-Large | 32k | 58 | $3.00 | 63.3 | 0.45
GPT-4 Turbo | 128k | 74 | $15.00 | 35.9 | 0.64
GPT-3.5 Turbo | 16k | 52 | $0.75 | 87.7 | 0.43
GPT-3.5 Turbo Instruct | 4k | | $1.63 | 114.4 | 0.58
GPT-4 | 8k | | $37.50 | 26.9 | 0.61
Llama 3 70B | 8k | 62 | $0.90 | 52.4 | 0.49
Llama 3 8B | 8k | 46 | $0.15 | 102.1 | 0.40
Llama 2 Chat 70B | 4k | 39 | $1.39 | 48.4 | 0.45
Llama 2 Chat 13B | 4k | 36 | $0.30 | 53.5 | 0.39
Llama 2 Chat 7B | 4k | | $0.33 | 125.8 | 0.54
Gemini 1.0 Pro | 33k | | $0.75 | 95.9 | 1.20
Claude 3 Sonnet | 200k | 57 | $6.00 | 63.0 | 0.77
Mistral Large | 33k | 56 | $6.00 | 36.4 | 0.57
Mistral Small (Feb '24) | 33k | 50 | $1.50 | 53.5 | 0.45
Mistral 7B | 33k | 24 | $0.16 | 106.3 | 0.35
Codestral | 33k | | $0.30 | 46.8 | 0.48
Mistral Medium | 33k | | $4.09 | 38.0 | 0.76
OpenChat 3.5 | 8k | 43 | $0.06 | 76.6 | 0.32
Jamba Instruct | 256k | 28 | $0.55 | 74.3 | 0.58
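Providers typically price input and output tokens separately, so a single price column implies some blending of the two rates. As a purely illustrative sketch (the 3:1 input:output weighting below is an assumption for this example, not a statement of this page's methodology), a blended price can be computed as a weighted average:

```python
# Illustrative only: blending separate input/output prices into one USD per 1M tokens figure.
# The 3:1 input:output weighting is an assumption for this example, not this leaderboard's
# documented methodology.
def blended_price(input_price_per_1m: float, output_price_per_1m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    total = input_weight + output_weight
    return (input_price_per_1m * input_weight + output_price_per_1m * output_weight) / total

# Example: a model priced at $30/1M input and $60/1M output tokens blends to $37.50/1M
# under this weighting.
print(blended_price(30.0, 60.0))  # 37.5
```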
Key definitions
Models compared:
- OpenAI: GPT-3.5 Turbo, GPT-3.5 Turbo (0125), GPT-3.5 Turbo (1106), GPT-3.5 Turbo Instruct, GPT-4, GPT-4 Turbo, GPT-4 Turbo (0125), GPT-4 Vision, GPT-4o, GPT-4o (May '24), GPT-4o mini, o1-mini, and o1-preview
- Meta: Code Llama 70B, Llama 2 Chat 13B, Llama 2 Chat 70B, Llama 2 Chat 7B, Llama 3 70B, Llama 3 8B, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, Llama 3.2 11B (Vision), Llama 3.2 1B, Llama 3.2 3B, and Llama 3.2 90B (Vision)
- Google: Gemini 1.0 Pro, Gemini 1.5 Flash (May '24), Gemini 1.5 Flash (Sep '24), Gemini 1.5 Flash-8B, Gemini 1.5 Pro (May '24), Gemini 1.5 Pro (Sep '24), Gemma 2 27B, Gemma 2 9B, and Gemma 7B
- Anthropic: Claude 2.0, Claude 2.1, Claude 3 Haiku, Claude 3 Opus, Claude 3 Sonnet, Claude 3.5 Sonnet, and Claude Instant
- Mistral: Codestral, Codestral-Mamba, Mistral 7B, Mistral Large, Mistral Large 2, Mistral Medium, Mistral NeMo, Mistral Small (Feb '24), Mistral Small (Sep '24), Mixtral 8x22B, Mixtral 8x7B, and Pixtral 12B
- Cohere: Command, Command Light, Command-R, Command-R (Mar '24), Command-R+ (Apr '24), and Command-R+
- Perplexity: PPLX-70B Online, PPLX-7B-Online, Sonar 3.1 Large, Sonar 3.1 Small, Sonar Large, and Sonar Small
- xAI: Grok-1
- OpenChat: OpenChat 3.5
- Microsoft Azure: Phi-3 Medium 14B and Phi-3 Mini
- Upstage: Solar Mini and Solar Pro
- Databricks: DBRX
- Reka AI: Reka Core, Reka Edge, and Reka Flash
- Other: LLaVA-v1.5-7B
- AI21 Labs: Jamba 1.5 Large, Jamba 1.5 Mini, and Jamba Instruct
- DeepSeek: DeepSeek-Coder-V2, DeepSeek-V2, and DeepSeek-V2.5
- Snowflake: Arctic
- Alibaba: Qwen2 72B and Qwen2.5 72B
- 01.AI: Yi-Large