Phi-4 Mini Instruct: Intelligence, Performance & Price Analysis
Analysis of Microsoft Azure's Phi-4 Mini Instruct and comparison to other AI models across key metrics including quality, price, performance (tokens per second & time to first token), context window & more. For more details, including our methodology, see our FAQs.
Comparison Summary
Intelligence: Phi-4 Mini has lower intelligence than average, with an Artificial Analysis Intelligence Index of 16.
Price:
Speed: Phi-4 Mini is slower than average, with an output speed of 47.8 tokens per second (see the measurement sketch below this summary).
Latency: Phi-4 Mini has lower latency than average, taking 0.32s to receive the first token (TTFT).
Context Window: Phi-4 Mini has a smaller context window than average, with a context window of 130k tokens.
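The output speed and TTFT figures above are derived from streaming API responses. As a rough, non-authoritative illustration of how such metrics can be measured, the Python sketch below times the first streamed token and the overall token rate against an OpenAI-compatible endpoint; the base URL, API key, and model identifier are placeholders rather than Azure's actual values, and streamed chunks only approximate token counts.

```python
import time
from openai import OpenAI  # assumes the OpenAI Python SDK (v1+) is installed

# Placeholder endpoint, key, and model name: substitute your provider's values.
client = OpenAI(base_url="https://example-provider.local/v1", api_key="YOUR_KEY")

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="phi-4-mini-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize the causes of tides."}],
    stream=True,
)

for chunk in stream:
    # Some chunks (e.g. the final usage chunk) may carry no choices/content.
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # time to first token (TTFT)
        chunks += 1  # streamed chunks roughly approximate output tokens

end = time.perf_counter()
if first_token_at is not None:
    print(f"TTFT: {first_token_at - start:.2f}s")
    # Output speed: tokens (approximated by chunks) per second after the first token.
    print(f"Output speed: {chunks / (end - first_token_at):.1f} tokens/s")
```

Numbers measured this way will vary with provider load, prompt length, and concurrency, so they may not match the figures reported on this page.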
Highlights
Intelligence: Artificial Analysis Intelligence Index (higher is better); the figure is an estimate, with an independent evaluation forthcoming.
Speed: Output Tokens per Second (higher is better).
Price: USD per 1M Tokens (lower is better); see the cost sketch below.
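For context on the "USD per 1M Tokens" unit, the sketch below converts per-million-token prices into a per-request cost. The prices in the example call are hypothetical and are not Phi-4 Mini's actual rates.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost of a single request given per-1M-token prices in USD."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example with hypothetical prices: 2,000 input tokens and 500 output tokens.
print(request_cost_usd(2_000, 500, input_price_per_m=0.10, output_price_per_m=0.30))
```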
Phi-4 Mini Instruct Model Details
Comparisons to Phi-4 Mini
Phi-4 Mini
GPT-4.1
gpt-oss-120B (high)
GPT-5 (minimal)
GPT-5 (high)
gpt-oss-20B (high)
o3
Llama 4 Maverick
Gemini 2.5 Pro
Gemini 2.5 Flash (Reasoning)
Claude 4 Sonnet Thinking
Claude 4.1 Opus Thinking
Magistral Small
DeepSeek R1 0528
DeepSeek V3.1 (Non-reasoning)
DeepSeek V3.1 (Reasoning)
Grok Code Fast 1
Grok 4
Solar Pro 2 (Reasoning)
Llama Nemotron Super 49B v1.5 (Reasoning)
Kimi K2
EXAONE 4.0 32B (Reasoning)
GLM-4.5
Qwen3 235B 2507 (Reasoning)
Further details