Speech to Text AI Model & Provider Leaderboard
Compare word error rate, speed, and pricing across Speech to Text models and providers. Our comprehensive analysis helps you choose the best Speech to Text model for your specific use case and requirements.
For further details, see our methodology page.
Artificial Analysis Word Error Rate (AA-WER) Index
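To make the index concrete, here is a minimal sketch of word error rate, the metric underlying AA-WER: the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. This is the standard WER formulation only; the AA-WER index additionally applies the text normalization and dataset averaging described on the methodology page.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") in a six-word reference: WER = 1/6.
print(f"{wer('the cat sat on the mat', 'the cat sat on a mat'):.1%}")  # 16.7%
```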
AA-WER by Dataset
AA-WER: AA-AgentTalk Dataset
Cleaned Dataset Comparison
VoxPopuli: Cleaned vs Original Subset of Publicly Available Data
API Benchmarks
Artificial Analysis Word Error Rate Index vs. Price
Speed Factor
Price of Transcription
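The two API benchmark charts above reduce to simple arithmetic. A hedged sketch, assuming speed factor is defined as audio duration divided by wall-clock transcription time (x real-time), and price is quoted per 1,000 minutes of audio:

```python
def speed_factor(audio_seconds: float, transcription_seconds: float) -> float:
    """Seconds of audio transcribed per second of wall-clock time."""
    return audio_seconds / transcription_seconds

def cost_usd(audio_minutes: float, price_per_1000_min: float) -> float:
    """Cost of a transcription job at a given per-1,000-minutes price."""
    return audio_minutes * price_per_1000_min / 1000

# A 60-minute file processed in 12 seconds runs at 300x real-time.
print(speed_factor(60 * 60, 12))          # 300.0
# Transcribing 10 hours at $1.00 per 1,000 minutes costs $0.60.
print(round(cost_usd(10 * 60, 1.00), 2))  # 0.6
```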
| Model | Whisper version | Word Error Rate (AA-WER) | Median Speed Factor | Price (USD per 1,000 minutes) |
|---|---|---|---|---|
| Whisper Large v2 | large-v2 | 4.2% | 29.4 | 6.00 |
| Wizper Large v3 | large-v3 | 4.9% | 222.0 | 0.50 |
| Incredibly Fast Whisper | large-v3 | 5.8% | 51.3 | 1.49 |
| Whisper Large v3 | large-v3 | 10.2% | 2.8 | 4.23 |
| Whisper Large v3 | large-v3 | 4.3% | 62.7 | 1.15 |
| Whisper Large v3 Turbo | v3 Turbo | 4.8% | 258.8 | 0.67 |
| Whisper Large v3 | large-v3 | 4.8% | 365.3 | 1.00 |
| Whisper Large v3 Turbo | v3 Turbo | 4.8% | 320.9 | 1.00 |
| Whisper Large v3 | large-v3 | 7.4% | 114.6 | 1.50 |
| Speechmatics Standard | | 5.3% | 45.7 | 4.00 |
| Speechmatics Enhanced | | 4.3% | 44.5 | 6.70 |
| Nova-2 | | 5.6% | 503.8 | 4.30 |
| Base | | 10.9% | 574.1 | 12.50 |
| Nova-3 | | 6.5% | 347.5 | 4.30 |
| Universal, AssemblyAI | | 4.0% | 115.2 | 2.50 |
| Slam-1 | | 4.1% | 85.2 | 4.50 |
| Universal-3 Pro | | 3.3% | 34.3 | 3.50 |
| Amazon Transcribe | | 4.3% | 16.7 | 24.00 |
| Chirp 2, Google | | 6.0% | 14.6 | 16.00 |
| Chirp | | 31.3% | 13.0 | 16.00 |
| Chirp 3, Google | | 4.6% | 28.9 | 16.00 |
| Scribe v1 | | 3.2% | 36.2 | 6.67 |
| Scribe v2 | | 2.3% | 30.9 | 6.67 |
| Gemini 2.0 Flash | | 4.0% | 52.9 | 1.40 |
| Gemini 2.0 Flash Lite | | 4.0% | 50.0 | 0.19 |
| Gemini 2.5 Flash Lite | | 5.3% | 69.2 | 6.56 |
| Gemini 2.5 Flash | | 5.3% | 52.4 | 6.66 |
| Gemini 2.5 Pro | | 3.1% | 11.9 | 11.39 |
| Gemini 3 Pro | | 2.9% | 5.6 | 18.40 |
| Gemini 3 Flash | | 3.1% | 14.5 | 13.70 |
| GPT-4o Transcribe | | 4.1% | 33.9 | 6.00 |
| GPT-4o Mini Transcribe | | 4.6% | 48.6 | 3.00 |
| Parakeet RNNT 1.1B | | 5.0% | 6.2 | 1.91 |
| Parakeet TDT 0.6B V2, NVIDIA | | 6.8% | 92.2 | 0.00 |
| Canary Qwen 2.5B, NVIDIA | | 4.4% | 5.7 | 0.74 |
| Voxtral Mini | | 3.7% | 70.7 | 1.00 |
| Voxtral Small | | 3.0% | 67.2 | 4.00 |
| Voxtral Mini | | 4.0% | 69.8 | 1.00 |
| Solaria-1, Gladia | | 4.2% | 51.2 | 4.07 |
| Nova 2 Omni | | 5.9% | 34.9 | 1.85 |
| Nova 2 Pro | | 5.0% | 23.3 | 3.10 |
| Pulse STT | | 6.0% | 147.5 | 8.00 |
Frequently Asked Questions
Common questions about Speech to Text models and providers
Which Speech to Text model is the most accurate?
Scribe v2, ElevenLabs leads with the lowest AA-WER (Artificial Analysis Word Error Rate) of 2.3% across the 43 models evaluated.
What are the top Speech to Text models by accuracy?
The top speech to text models by accuracy (AA-WER) are: 1. Scribe v2, ElevenLabs (2.3%), 2. Gemini 3 Pro, Google (2.9%), 3. Voxtral Small, Mistral (3.0%), 4. Gemini 2.5 Pro, Google (3.1%), 5. Gemini 3 Flash, Google (3.1%). Lower AA-WER indicates better transcription accuracy.
Which Speech to Text model is the fastest?
Base is the fastest with a speed factor of 574.1x real-time, followed by Nova-2 (503.8x) and Whisper Large v3, Fireworks (365.3x). Higher speed factors mean faster transcription.
Which Speech to Text model is the most affordable?
Gemini 2.0 Flash Lite is the most affordable at $0.19 per 1,000 minutes, followed by Wizper Large v3, fal.ai ($0.50) and Whisper Large v3 Turbo, Groq ($0.67).
Which open weights Speech to Text model is the most accurate?
Voxtral Small, Mistral is the most accurate open weights model with an AA-WER of 3.0%. There are 12 open weights models among the 43 evaluated.
What are the top open weights Speech to Text models by accuracy?
The top open weights speech to text models by accuracy are: 1. Voxtral Small, Mistral (AA-WER 3.0%), 2. Voxtral Mini Transcribe 2, Mistral (AA-WER 3.6%), 3. Voxtral Mini, Mistral (AA-WER 3.7%).
Which Speech to Text model should I use?
The best model depends on your priorities. Use the scatter plots to visualize trade-offs between accuracy (AA-WER), speed, and price. For applications requiring high accuracy, prioritize models with lower AA-WER scores. For real-time applications, focus on speed factor. For cost-sensitive workloads, compare the price charts.
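One way to reason about this trade-off is a Pareto filter: keep only models that no other model beats on both accuracy and price. An illustrative sketch using a handful of figures from the leaderboard table above (the five-model subset is chosen for brevity, not a ranking):

```python
# Each row: (model, AA-WER %, price in USD per 1,000 minutes), from the table above.
models = [
    ("Scribe v2", 2.3, 6.67),
    ("Gemini 3 Pro", 2.9, 18.40),
    ("Voxtral Small", 3.0, 4.00),
    ("Gemini 2.0 Flash Lite", 4.0, 0.19),
    ("Whisper Large v3 Turbo", 4.8, 0.67),
]

def pareto(rows):
    """Keep rows not dominated: no other row is at least as accurate AND as cheap."""
    return [m for m in rows
            if not any(o[1] <= m[1] and o[2] <= m[2] and o != m for o in rows)]

for name, wer_pct, price in sorted(pareto(models), key=lambda r: r[1]):
    print(f"{name}: {wer_pct}% AA-WER at ${price}/1k min")
```

On this subset, Gemini 3 Pro and Whisper Large v3 Turbo drop out (each is beaten on both axes by another model), leaving the accuracy/price frontier to choose from based on budget.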