Phi-4 Multimodal Instruct: Intelligence, Performance & Price Analysis
Analysis of Microsoft Azure's Phi-4 Multimodal Instruct and comparison to other AI models across key metrics including quality, price, performance (tokens per second and time to first token), and context window. For more details, including our methodology, see our FAQs.
Comparison Summary
Intelligence: Phi-4 Multimodal is of lower quality than average, with an MMLU score of 0.485 and an Intelligence Index across evaluations of 27.
Price:
Speed: Phi-4 Multimodal is slower than average, with an output speed of 25.1 tokens per second.
Latency: Phi-4 Multimodal has lower latency than average, taking 0.33s to receive the first token (TTFT).
Context Window: Phi-4 Multimodal has a smaller context window than average, with a context window of 130k tokens.
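The speed and latency figures above (output tokens per second and time to first token) can be reproduced against any provider serving the model. Below is a minimal sketch assuming an OpenAI-compatible streaming endpoint; the base URL, API key, and model identifier are placeholders, and the token count is approximated by word count rather than the model's tokenizer.

```python
# Sketch: measure time to first token (TTFT) and output speed for a
# streamed completion. Endpoint, API key, and model name are placeholders.
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                      # placeholder key
)

start = time.perf_counter()
first_token_time = None
chunks = []

stream = client.chat.completions.create(
    model="phi-4-multimodal-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize the trade-offs of small multimodal models."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    if delta and first_token_time is None:
        first_token_time = time.perf_counter()  # TTFT reference point
    chunks.append(delta)

end = time.perf_counter()
output_text = "".join(chunks)

# Approximate output tokens by whitespace-separated words; a real benchmark
# would count tokens with the model's tokenizer.
approx_tokens = len(output_text.split())
ttft = (first_token_time - start) if first_token_time else float("nan")
generation_time = end - (first_token_time or start)

print(f"TTFT: {ttft:.2f}s")
print(f"Output speed: {approx_tokens / generation_time:.1f} tokens/s (approximate)")
```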
Highlights
Charts compare Intelligence (Artificial Analysis Intelligence Index; higher is better), Speed (output tokens per second; higher is better), and Price (USD per 1M tokens; lower is better), filterable by parallel queries and prompt length.
Phi-4 Multimodal Instruct Model Details
Comparisons to Phi-4 Multimodal
GPT-4o mini
GPT-4o (March 2025, chatgpt-4o-latest)
o3-mini (high)
o1-pro
Llama 3.3 Instruct 70B
Llama 4 Maverick
Llama 4 Scout
Gemini 2.0 Flash (Feb '25)
Gemma 3 27B Instruct
Gemini 2.5 Pro Experimental (Mar '25)
Claude 3.7 Sonnet (Extended Thinking)
Claude 3.7 Sonnet (Standard)
Mistral Large 2 (Nov '24)
Mistral Small 3.1
DeepSeek R1
DeepSeek V3 0324 (Mar '25)
Grok 3
Grok 3 Reasoning Beta
Nova Pro
Nova Micro
Command A
DeepSeek V3 (Dec '24)
Further details