Comparison of Models: Quality, Performance & Price Analysis
Comparison and analysis of AI models across key performance metrics, including quality, price, output speed, latency, context window, and others. Click on any model to see detailed metrics. For more details, including our methodology, see our FAQs.
Model Comparison Summary
Models compared:
OpenAI: GPT-3.5 Turbo, GPT-3.5 Turbo (0125), GPT-3.5 Turbo (1106), GPT-3.5 Turbo Instruct, GPT-4, GPT-4 Turbo, GPT-4 Turbo (0125), GPT-4 Vision, GPT-4o, GPT-4o (May '24), GPT-4o mini, o1-mini, and o1-preview
Meta: Code Llama 70B, Llama 2 Chat 13B, Llama 2 Chat 70B, Llama 2 Chat 7B, Llama 3 70B, Llama 3 8B, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, Llama 3.2 11B (Vision), Llama 3.2 1B, Llama 3.2 3B, and Llama 3.2 90B (Vision)
Google: Gemini 1.0 Pro, Gemini 1.5 Flash (May '24), Gemini 1.5 Flash (Sep '24), Gemini 1.5 Flash-8B, Gemini 1.5 Pro (May '24), Gemini 1.5 Pro (Sep '24), Gemma 2 27B, Gemma 2 9B, and Gemma 7B
Anthropic: Claude 2.0, Claude 2.1, Claude 3 Haiku, Claude 3 Opus, Claude 3 Sonnet, Claude 3.5 Sonnet, and Claude Instant
Mistral: Codestral, Codestral-Mamba, Mistral 7B, Mistral Large, Mistral Large 2, Mistral Medium, Mistral NeMo, Mistral Small (Feb '24), Mistral Small (Sep '24), Mixtral 8x22B, Mixtral 8x7B, and Pixtral 12B
Cohere: Command, Command Light, Command-R, Command-R (Mar '24), Command-R+ (Apr '24), and Command-R+
Perplexity: PPLX-70B Online, PPLX-7B-Online, Sonar 3.1 Large, Sonar 3.1 Small, Sonar Large, and Sonar Small
xAI: Grok-1
OpenChat: OpenChat 3.5
Microsoft Azure: Phi-3 Medium 14B and Phi-3 Mini
Upstage: Solar Mini and Solar Pro
Databricks: DBRX
Reka AI: Reka Core, Reka Edge, and Reka Flash
Other: LLaVA-v1.5-7B
AI21 Labs: Jamba 1.5 Large, Jamba 1.5 Mini, and Jamba Instruct
DeepSeek: DeepSeek-Coder-V2, DeepSeek-V2, and DeepSeek-V2.5
Snowflake: Arctic
Alibaba: Qwen2 72B and Qwen2.5 72B
01.AI: Yi-Large