Claude Instant: Intelligence, Performance & Price Analysis
Analysis of Anthropic's Claude Instant and comparison to other AI models across key metrics including quality, price, performance (tokens per second & time to first token), context window & more. Click on any model to compare API providers for that model. For more details, including our methodology, see our FAQs.
Anthropic has launched a newer model, Claude 2.0. We suggest considering this model instead of Claude Instant. See the following pages for a comparison of Claude 2.0 to other models and Claude 2.0 API provider benchmarks.
Comparison Summary
Intelligence: Claude Instant is of lower quality than average, with an MMLU score of 0.434 and an Intelligence Index of 14 across evaluations.
Price: Claude Instant is cheaper than average, with a blended price of $1.20 per 1M Tokens (3:1 input-to-output blend; see the calculation sketch after this summary).
Claude Instant input token price: $0.80 per 1M Tokens; output token price: $2.40 per 1M Tokens.
Speed: Claude Instant is slower than average, with an output speed of 62.9 tokens per second.
Latency: Claude Instant has lower latency than average, taking 0.54s to receive the first token (TTFT).
Context Window: Claude Instant has a smaller context window than average, with a context window of 100k tokens.
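To make the summary figures concrete, the sketch below shows how the blended price follows from the per-token prices (assuming the 3:1 input-to-output token ratio used above) and how the speed and latency figures can be combined into a rough end-to-end response-time estimate. This is an illustrative sketch, not Artificial Analysis' exact methodology; the function names and the response-time formula are assumptions.

```python
# Illustrative sketch: reproducing the summary figures from the per-token
# prices and the measured speed/latency numbers (not the official methodology).

INPUT_PRICE = 0.80    # USD per 1M input tokens (from the summary above)
OUTPUT_PRICE = 2.40   # USD per 1M output tokens
OUTPUT_SPEED = 62.9   # output tokens per second
TTFT = 0.54           # seconds to first token

def blended_price(input_price: float, output_price: float, ratio: int = 3) -> float:
    """Blended price per 1M tokens, assuming `ratio` input tokens per output token."""
    return (ratio * input_price + output_price) / (ratio + 1)

def estimated_response_time(output_tokens: int) -> float:
    """Rough end-to-end time: TTFT plus generation time at a constant output speed.
    Illustrative assumption only."""
    return TTFT + output_tokens / OUTPUT_SPEED

print(blended_price(INPUT_PRICE, OUTPUT_PRICE))        # 1.2 -> $1.20 per 1M tokens (3:1 blend)
print(round(estimated_response_time(500), 1))          # ~8.5 s for a 500-token response
```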
Highlights
Charts compare Claude Instant on: Intelligence (Artificial Analysis Intelligence Index; higher is better; estimate, independent evaluation forthcoming), Speed (output tokens per second; higher is better), and Price (USD per 1M Tokens; lower is better).
Claude Instant Model Details
Comparisons to Claude Instant
o4-mini (high)
o3
GPT-4.1
Llama 4 Maverick
Llama 4 Scout
Gemini 2.5 Pro Preview (May' 25)
Gemini 2.5 Flash Preview (May '25) (Reasoning)
Claude 4 Sonnet (Extended Thinking)
Claude 4 Sonnet
Claude 3.7 Sonnet (Extended Thinking)
Mistral Medium 3
DeepSeek V3 0324 (March '25)
DeepSeek R1 0528 (May '25)
Grok 3 mini Reasoning (high)
Nova Premier
Llama 3.1 Nemotron Ultra 253B v1 (Reasoning)
Qwen3 235B A22B (Reasoning)
GPT-4o (Nov '24)
DeepSeek R1 (Jan '25)
Further details