Gemini 2.0 Flash-Lite (Feb '25): Intelligence, Performance & Price Analysis
Analysis of Google's Gemini 2.0 Flash-Lite (Feb '25) and comparison to other AI models across key metrics including quality, price, performance (tokens per second and time to first token), context window, and more. For more details, including our methodology, see our FAQs.
Comparison Summary
Intelligence:
Gemini 2.0 Flash-Lite (Feb '25) is of higher quality than average, with an MMLU score of 0.724 and an Intelligence Index of 41 across evaluations.
Price: Gemini 2.0 Flash-Lite (Feb '25) is cheaper than average, with a blended price of $0.13 per 1M tokens (3:1 input:output ratio).
Gemini 2.0 Flash-Lite (Feb '25) Input token price: $0.07, Output token price: $0.30 per 1M Tokens.
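The blended figure follows from the 3:1 input:output weighting stated above; a minimal sketch of the arithmetic, using the listed per-1M-token prices:

```python
# Blended price weights input tokens 3:1 over output tokens, per 1M tokens.
input_price = 0.07   # USD per 1M input tokens
output_price = 0.30  # USD per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.2f} per 1M tokens")  # → $0.13 per 1M tokens
```

The same weighting applies when comparing blended prices across the models listed below.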
Speed: Gemini 2.0 Flash-Lite (Feb '25) is faster than average, with an output speed of 179.3 tokens per second.
Latency: Gemini 2.0 Flash-Lite (Feb '25) has lower latency than average, taking 0.25s to receive the first token (TTFT).
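TTFT and output speed together determine how long a full response takes. A rough sketch, assuming a constant decode rate and a hypothetical 500-token response:

```python
# Simple end-to-end latency model (assumption: constant decode rate):
#   total_time ≈ TTFT + output_tokens / output_speed
ttft = 0.25          # seconds to first token
speed = 179.3        # output tokens per second
output_tokens = 500  # hypothetical response length

total = ttft + output_tokens / speed
print(f"~{total:.2f}s end to end")  # ≈ 3.04s
```

This is why a low TTFT matters most for short responses, while output speed dominates for long ones.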
Context Window: Gemini 2.0 Flash-Lite (Feb '25) has a larger context window than average, at 1.0M tokens.
Highlights
Intelligence: Artificial Analysis Intelligence Index; higher is better
Speed: Output tokens per second; higher is better
Price: USD per 1M tokens; lower is better
Comparisons to Gemini 2.0 Flash-Lite (Feb '25)
o1
GPT-4o (Nov '24)
GPT-4o mini
o3-mini (high)
GPT-4.5 (Preview)
Llama 3.3 Instruct 70B
Llama 3.1 Instruct 8B
Gemini 2.0 Pro Experimental (Feb '25)
Gemini 2.0 Flash (Feb '25)
Gemma 3 27B Instruct
Claude 3.5 Haiku
Claude 3.7 Sonnet (Extended Thinking)
Claude 3.7 Sonnet (Standard)
Mistral Large 2 (Nov '24)
Mistral Small 3
DeepSeek R1
DeepSeek V3
Grok 3 Reasoning Beta
Grok 3
Nova Pro
Nova Micro
Command A
QwQ 32B
Further details