
AssemblyAI: API Provider Benchmarking & Analysis

Analysis of AssemblyAI API providers across performance metrics including Artificial Analysis Word Error Rate Index, speed, and price.
Creator:
AssemblyAI
License:
Proprietary

Highlights

Word Error Rate Index: % of words transcribed incorrectly; lower is better.
Speed Factor: input audio seconds transcribed per second; higher is better.
Price: USD per 1000 minutes of audio; lower is better.

Artificial Analysis Word Error Rate (AA-WER) Index by API


% of words transcribed incorrectly, Lower is better
Note: Models that do not support transcription of audio longer than 10 minutes were evaluated on 9-minute chunks of the test set (applies to GPT-4o Transcribe; GPT-4o Mini Transcribe; Voxtral Mini; Voxtral Mini, DeepInfra; Gemini 2.5 Flash Lite). For models with even shorter limits, all files are split into 30-second chunks.
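For models subject to these limits, evaluation amounts to splitting each file into fixed-length segments before transcription. Below is a minimal sketch of that segmentation, assuming straightforward non-overlapping chunks; the function name and the one-hour file duration are illustrative, not taken from the methodology page.

```python
# Fixed-length, non-overlapping chunk boundaries in seconds.
# Assumed interpretation of the chunked evaluation described above.
def chunk_bounds(total_seconds: float, chunk_seconds: float):
    """Return (start, end) offsets covering the whole file."""
    bounds = []
    start = 0.0
    while start < total_seconds:
        bounds.append((start, min(start + chunk_seconds, total_seconds)))
        start += chunk_seconds
    return bounds

# 9-minute (540 s) chunks for models limited to ~10 minutes of audio:
print(chunk_bounds(3600, 540))      # hypothetical 1-hour file -> 7 chunks
# 30-second chunks for models with even shorter limits:
print(len(chunk_bounds(3600, 30)))  # -> 120 chunks
```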

Measures transcription accuracy across 3 datasets to evaluate models in real-world speech with diverse accents, domain-specific language, and challenging channel & acoustic conditions.

AA-WER is calculated as an audio-duration-weighted average of WER across ~2 hours from three datasets: VoxPopuli, Earnings-22, and AMI-SDM. See methodology for more detail.
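To make the weighting concrete, here is a minimal sketch of an audio-duration-weighted WER average; the per-dataset WER and duration figures are hypothetical placeholders, not measured results from this page.

```python
# Duration-weighted WER: sum of (WER x dataset seconds) over total seconds.
# All numbers are illustrative placeholders, not benchmark results.
def duration_weighted_wer(results):
    """results: list of (wer, audio_seconds) pairs, one per dataset."""
    total_seconds = sum(seconds for _, seconds in results)
    return sum(wer * seconds for wer, seconds in results) / total_seconds

results = [
    (0.08, 2400),  # VoxPopuli (hypothetical WER, ~40 min of audio)
    (0.12, 2400),  # Earnings-22 (hypothetical)
    (0.18, 2400),  # AMI-SDM (hypothetical)
]
print(f"AA-WER Index: {duration_weighted_wer(results):.1%}")  # -> 12.7%
```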

Artificial Analysis Word Error Rate (AA-WER) Index vs Other Metrics

Artificial Analysis Word Error Rate Index vs. Price

% of words transcribed incorrectly (vertical axis) vs. USD per 1000 minutes of audio (horizontal axis); bubble size represents input audio seconds transcribed per second. The lower-left quadrant (low error rate, low price) is the most attractive.
Models shown: Amazon Transcribe; Canary Qwen 2.5B (Replicate); Chirp 2 (Google); GPT-4o Transcribe; Nova 2 Omni; Nova-3; Parakeet TDT 0.6B V3 (Hathora); Scribe v2; Scribe (ElevenLabs); Slam-1; Universal (AssemblyAI); Voxtral Small; Whisper Large v3 (Fireworks); Whisper Large v3 (Groq).



Speed Factor


Input audio seconds transcribed per second, Higher is better

Audio file seconds transcribed per second of processing time. Higher factor indicates faster transcription speed.

Artificial Analysis measurements are based on an audio duration of 10 minutes. Speed Factor may vary for other durations, particularly for very short durations (under 1 minute).
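The Speed Factor itself is a simple ratio, sketched below with illustrative numbers; the processing time is a hypothetical measurement, not a reported result.

```python
# Speed Factor: input audio seconds transcribed per second of processing.
audio_duration_s = 600.0    # the 10-minute test duration used above
processing_time_s = 12.5    # hypothetical wall-clock transcription time
speed_factor = audio_duration_s / processing_time_s
print(f"Speed Factor: {speed_factor:.0f}x")  # -> 48x real time
```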

Price of Transcription

USD per 1000 minutes of audio, Lower is better

Cost in USD per 1000 minutes of audio transcribed. Reflects the pricing model of the transcription service or software.

For providers that price based on processing time rather than audio duration (incl. Replicate, fal), we have calculated an indicative per-minute price from the processing time expected per minute of audio. Further detail is available on the methodology page.
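As a rough sketch of that conversion (both rates below are hypothetical placeholders, not any provider's actual pricing):

```python
# Indicative USD per 1000 audio minutes for a processing-time-billed provider,
# derived from an assumed compute rate and an assumed Speed Factor.
compute_rate_usd_per_s = 0.000225   # hypothetical per-processing-second rate
speed_factor = 40.0                 # assumed audio seconds per processing second
processing_s_per_audio_min = 60.0 / speed_factor   # 1.5 s of compute per audio minute
usd_per_1000_audio_min = compute_rate_usd_per_s * processing_s_per_audio_min * 1000
print(f"Indicative price: ${usd_per_1000_audio_min:.2f} per 1000 minutes")  # -> $0.34
```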

Note: Groq charges for a minimum of 10 seconds per request.

Summary of Key Metrics & Further Information
Model (Provider)
Whisper Large v2 (OpenAI)
Whisper Large v2 (Microsoft Azure)
Wizper Large v3 (fal.ai)
Incredibly Fast Whisper (Replicate)
Whisper Large v2 (Replicate)
Whisper Large v3 (Replicate)
WhisperX (Replicate)
Whisper Large v3 (Groq)
Whisper Large v3 (DeepInfra)
Whisper Large v3 (fal.ai)
Whisper Large v3 Turbo (Groq)
Whisper Large v3 (Fireworks)
Whisper Large v3 Turbo (Fireworks)
Whisper Large v3 (SambaNova)
Whisper Large v3 (Together.ai)
Speechmatics Standard (Speechmatics)
Speechmatics Enhanced (Speechmatics)
Azure Realtime Speech to Text (Microsoft Azure)
Nova-2 (Deepgram)
Base (Deepgram)
Nova-3 (Deepgram)
Universal (AssemblyAI)
Slam-1 (AssemblyAI)
Amazon Transcribe (Amazon Bedrock)
Chirp (Google)
Chirp 2 (Google)
Chirp 3 (Google)
Scribe (ElevenLabs)
Scribe v2 (ElevenLabs)
Gemini 2.0 Flash (Google)
Gemini 2.0 Flash Lite (Google)
Gemini 2.5 Flash Lite (Google)
Gemini 2.5 Flash (Google)
Gemini 2.5 Pro (Google)
GPT-4o Transcribe (OpenAI)
GPT-4o Mini Transcribe (OpenAI)
Parakeet RNNT 1.1B (Replicate)
Parakeet TDT 0.6B V2 (NVIDIA)
Canary Qwen 2.5B (Replicate)
Parakeet TDT 0.6B V3 (Hathora)
Voxtral Mini (Mistral)
Voxtral Small (Mistral)
Voxtral Small (DeepInfra)
Voxtral Mini (DeepInfra)
Solaria-1 (Gladia)
Nova 2 Omni (Amazon Bedrock)
Nova 2 Pro (Amazon Bedrock)

Speech to Text providers compared: OpenAI, Speechmatics, Microsoft Azure, fal.ai, Replicate, Deepgram, Groq, DeepInfra, Fireworks, AssemblyAI, Amazon Bedrock, Google, ElevenLabs, SambaNova, Together.ai, Mistral, NVIDIA, Gladia, and Hathora.