Gemini 2.0 Pro Experimental (Feb '25): Intelligence, Performance & Price Analysis
Analysis of Google's Gemini 2.0 Pro Experimental (Feb '25) and comparison to other AI models across key metrics, including quality, price, performance (tokens per second and time to first token), and context window. For more details, including our methodology, see our FAQs.
Comparison Summary
Intelligence:
Gemini 2.0 Pro Experimental is of higher quality than average, with an MMLU score of 0.805 and an Intelligence Index across evaluations of 49.
Price:
Speed: Gemini 2.0 Pro Experimental is slower than average, with an output speed of 73.2 tokens per second.
Latency: Gemini 2.0 Pro Experimental has higher latency than average, taking 17.11s to receive the first token (TTFT).
Context Window: Gemini 2.0 Pro Experimental has a larger context window than average, at 2.0M tokens.
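The speed and latency figures above combine into a rough end-to-end response time: time to first token, plus the time to stream the remaining output at the reported tokens-per-second rate. A minimal sketch, using the TTFT and output speed reported on this page; the 500-token response length is a hypothetical example value:

```python
# Estimate end-to-end response time from TTFT and output speed.
# TTFT (17.11 s) and output speed (73.2 tokens/s) are the figures
# reported above for Gemini 2.0 Pro Experimental; the 500-token
# response length is a hypothetical example, not a page metric.

def total_response_time(ttft_s: float, tokens_per_s: float, output_tokens: int) -> float:
    """Time to first token plus time to stream the output tokens."""
    return ttft_s + output_tokens / tokens_per_s

estimate = total_response_time(17.11, 73.2, 500)
print(f"{estimate:.2f} s")  # ≈ 23.94 s for a 500-token response
```

This illustrates why TTFT matters independently of throughput: at 73.2 tokens/s, over two-thirds of the wait for a 500-token response comes from the 17.11s before the first token arrives.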
Highlights
Intelligence
Artificial Analysis Intelligence Index; Higher is better
Speed
Output Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better
Gemini 2.0 Pro Experimental (Feb '25) Model Details
Comparisons to Gemini 2.0 Pro Experimental
Claude 4 Sonnet
Claude 4 Sonnet (Extended Thinking)
o4-mini (high)
GPT-4.1
o3
Llama 4 Maverick
Llama 4 Scout
Gemini 2.5 Flash Preview (May '25) (Reasoning)
Gemini 2.5 Pro Preview (Mar '25)
Claude 3.7 Sonnet (Extended Thinking)
Mistral Medium 3
DeepSeek R1
DeepSeek V3 0324 (Mar '25)
Grok 3 mini Reasoning (high)
Nova Premier
Qwen3 235B A22B (Reasoning)
GPT-4o (Nov '24)
Further details