Microsoft has launched a newer model, Phi-4 Mini; we suggest considering that model instead.
For more information, see Comparison of Phi-4 Mini to other models and API provider benchmarks for Phi-4 Mini.
Phi-3 Mini Instruct 3.8B API Provider Benchmarking & Analysis
Analysis of API providers for Phi-3 Mini Instruct 3.8B across performance metrics including latency (time to first token), output speed (output tokens per second), and price. No API providers are currently benchmarked for this model.
Fastest (output speed, output tokens per second): 0 providers
Lowest latency (time to first answer token): 0 providers
Lowest price (blended price per 1M tokens): 0 providers
No API providers are currently available for Phi-3 Mini.
Benchmarks of providers are not available for this model.
Please see the models page for Phi-3 Mini Instruct 3.8B for details of the model and its intelligence compared to other models.
Highlights
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above!
Pricing
Pricing: Input and Output Prices: Phi-3 Mini Providers
Speed vs. Price: Phi-3 Mini Providers
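A blended price collapses a provider's separate input and output per-token prices into one number per 1M tokens. The exact weighting used on this page is not stated here; the sketch below assumes a 3:1 input-to-output token ratio, which is a common convention for benchmarking workloads.

```python
def blended_price_per_1m(input_price: float, output_price: float,
                         input_weight: float = 3.0,
                         output_weight: float = 1.0) -> float:
    """Blend per-1M-token input and output prices into a single figure.

    The 3:1 input:output weighting is an assumption, not a value
    confirmed by this page; adjust the weights to match your workload.
    """
    total = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total


# Example: $0.10 per 1M input tokens, $0.20 per 1M output tokens
print(blended_price_per_1m(0.10, 0.20))
```

With a 3:1 weighting this yields (3 × 0.10 + 1 × 0.20) / 4 = $0.125 per 1M tokens.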
Speed
Measured by Output Speed (tokens per second)
Output Speed: Phi-3 Mini Providers
Latency vs. Output Speed: Phi-3 Mini Providers
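Output speed here means output tokens per second during generation. A minimal sketch of how such a figure can be derived from a timed request is below; whether the first token (and its latency) is excluded from the throughput window is an assumption about the methodology, not something this page states.

```python
def output_speed_tps(total_time_s: float, ttft_s: float,
                     output_tokens: int) -> float:
    """Estimate output tokens per second for one streamed response.

    Assumes throughput is measured over the generation window after the
    first token arrives: (tokens - 1) tokens over (total - ttft) seconds.
    """
    return (output_tokens - 1) / (total_time_s - ttft_s)


# Example: 501 tokens streamed, first token at 0.5 s, done at 5.5 s
print(output_speed_tps(5.5, 0.5, 501))  # 100.0 tokens/s
```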
Latency
Measured by Time (seconds) to First Token
Time to First Token: Phi-3 Mini Providers
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time: Phi-3 Mini Providers
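The end-to-end figure combines the components listed above. A minimal sketch, assuming 'thinking' tokens are generated at the same output speed as answer tokens (an assumption; the page's exact accounting may differ):

```python
def end_to_end_seconds(ttft_s: float, output_tokens_per_s: float,
                       thinking_tokens: int = 0,
                       output_tokens: int = 500) -> float:
    """End-to-end response time for a 500-token answer:
    time to first token, plus time to generate any 'thinking' tokens
    (reasoning models), plus time to stream the answer tokens.
    """
    return ttft_s + (thinking_tokens + output_tokens) / output_tokens_per_s


# Example: 0.5 s to first token, 100 tokens/s, no thinking tokens
print(end_to_end_seconds(0.5, 100.0))  # 5.5 seconds
```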
API Features
Function (Tool) Calling & JSON Mode: Phi-3 Mini Providers
Context Window: Phi-3 Mini Providers
Summary Table of Key Comparison Metrics
Frequently Asked Questions
Common questions about Phi-3 Mini Instruct 3.8B providers
Phi-3 Mini Instruct 3.8B is not currently available through any of the API providers we benchmark. As an open-weights model, it can be self-hosted.