Text to Speech Benchmarking Methodology

Background and Scope

Artificial Analysis performs benchmarking on Text to Speech models delivered via serverless API endpoints. This page describes our Text to Speech benchmarking methodology, including both our quality benchmarking and performance benchmarking. We consider Text to Speech endpoints to be serverless when customers only pay for usage, not a fixed rate for access.

For both our performance benchmarking and the Speech Arena, our focus is on reflecting the end-user experience of using the serverless APIs. We benchmark the time to receive the audio file locally. Where the API response is a URL rather than bytes, we include the time to download the file in our response time measurement. Our approach is to use the standard implementation of each provider's API as suggested by its documentation. Where the provider's API offers the option, we standardize the audio sample rate to 22.05 kHz.
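
To illustrate, here is a minimal sketch of how such an end-to-end measurement might be implemented. The `synthesize` callable and its return convention (raw bytes or a URL string) are assumptions for the example, not any provider's actual API.

```python
import time

import requests

def measure_generation_time(synthesize, text: str) -> float:
    """Time an end-to-end TTS request, including any file download.

    `synthesize` is a hypothetical provider wrapper that returns either
    raw audio bytes or a URL string pointing at the generated clip.
    """
    start = time.monotonic()
    result = synthesize(text)              # provider API call
    if isinstance(result, str):            # URL response: download time counts
        audio = requests.get(result, timeout=60).content
    else:                                  # inline bytes response
        audio = result
    elapsed = time.monotonic() - start
    assert audio, "expected a non-empty audio clip"
    return elapsed
```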

Key Metrics

We use the following metrics to track quality, performance and price for Text to Speech models.

Quality ELO: Relative ELO score of the models as determined by responses from users in the Artificial Analysis Speech Arena.

Some models may not be shown because they do not yet have enough votes. We use a regression model similar to the one LMSYS uses to calculate ELO scores for Chatbot Arena.
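
LMSYS's published approach fits a Bradley-Terry model via logistic regression over pairwise votes; below is a minimal sketch of that technique. The battle tuple format and the Elo scale constants are illustrative assumptions, not the actual arena schema.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_elo(battles, models, scale=400, base=10, anchor=1000):
    """Fit Bradley-Terry style ratings from pairwise arena votes.

    `battles` is a list of (model_a, model_b, a_won) tuples, where
    a_won is 1 if model_a received the vote and 0 otherwise.
    """
    idx = {m: i for i, m in enumerate(models)}
    X = np.zeros((len(battles), len(models)))
    y = np.zeros(len(battles))
    for row, (a, b, a_won) in enumerate(battles):
        X[row, idx[a]] = np.log(base)      # positive coefficient for model_a
        X[row, idx[b]] = -np.log(base)     # negative coefficient for model_b
        y[row] = a_won
    lr = LogisticRegression(fit_intercept=False, C=1e6)  # weak regularization
    lr.fit(X, y)
    return {m: anchor + scale * lr.coef_[0][idx[m]] for m in models}
```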

Price per 1M Characters: Provider's price (USD) per 1M characters of text.

For providers which do not publish a price per 1M characters, we estimate pricing using the following alternative methodologies:

For providers which charge based on inference time, we estimate pricing from their measured inference time on a dataset of ~25 texts of ~500 characters each. This methodology has been applied to Replicate and fal.ai.
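
As a sketch of this calculation, the helper below scales the cost of one ~500-character generation up to 1M characters; the per-second rate parameter and the use of a median over the ~25 texts are assumptions for illustration.

```python
import statistics

def price_per_1m_chars_from_time(rate_usd_per_second: float,
                                 generation_seconds: list[float],
                                 chars_per_text: int = 500) -> float:
    """Estimate USD per 1M characters for providers billed by inference time."""
    cost_per_text = rate_usd_per_second * statistics.median(generation_seconds)
    return cost_per_text / chars_per_text * 1_000_000
```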

For providers which only offer subscription plans, we select the plan priced closest to $300 per month, which is usually representative of 'scaled' use of the API, and we assume 80% utilization of the characters included in that plan. For example, if a $300 plan includes 1 million characters, we assume 800,000 characters are used, giving a price per 1M characters of $300 / (1M × 80%) = $375. This methodology has been applied for ElevenLabs, Cartesia and LMNT.
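
The same worked example as a small helper; the 80% utilization figure mirrors the assumption above.

```python
def price_per_1m_chars_from_plan(plan_price_usd: float,
                                 plan_characters: int,
                                 utilization: float = 0.80) -> float:
    """Estimate effective USD per 1M characters for a subscription plan."""
    characters_used = plan_characters * utilization
    return plan_price_usd / characters_used * 1_000_000

# A $300 plan including 1M characters at 80% utilization -> $375 per 1M characters.
assert round(price_per_1m_chars_from_plan(300, 1_000_000), 2) == 375.0
```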

Generation Time: Median time the provider takes to generate a single audio clip with ~500 input characters, calculated over the past 14 days of measurements.

Generation Time includes downloading the audio clip from the provider where a URL is returned rather than the audio itself. This reflects the end-user latency of receiving a generated audio clip, as URLs can be returned before audio generation is complete. Audio clips are generated at a batch size of 1 where relevant.

Benchmarking is conducted 4 times daily at random times each day. For each benchmarking evaluation we select a single random voice for each model, and a unique prompt of ~500 characters is used for each generation.
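
A sketch of a single benchmarking pass, reusing the timing helper sketched earlier; the model objects, their `voices` attribute and `synthesize` method are hypothetical stand-ins for per-provider clients.

```python
import random
import statistics

def run_benchmark_pass(models, unique_prompts):
    """One of the 4 daily passes: a random voice and a fresh ~500-char prompt per model."""
    results = {}
    for model in models:
        voice = random.choice(model.voices)    # single random voice per evaluation
        prompt = unique_prompts.pop()          # unique prompt for each generation
        results[model.name] = measure_generation_time(
            lambda text: model.synthesize(text, voice=voice), prompt)
    return results

# The reported Generation Time is the median over the past 14 days of passes.
def generation_time(samples_over_14_days: list[float]) -> float:
    return statistics.median(samples_over_14_days)
```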

Model Voices

For each model tested, we test multiple voices to ensure that our comparison between models is representative and fair. Voice characteristics such as accent, gender and style are typically properties of the individual voices a model offers rather than of the underlying model. For each model we select 2 voices for each combination of gender (male/female) and accent (US/UK), i.e. 4 combinations and 8 voices in total. Where a gender and accent combination is not available, we exclude that combination from evaluation in the Speech Arena.
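
The grid of voices can be enumerated directly; this small example only illustrates the arithmetic (4 gender-accent combinations, 2 voices each, 8 voices in total when all are available).

```python
from itertools import product

genders, accents, voices_per_cell = ("M", "F"), ("US", "UK"), 2
cells = list(product(genders, accents))           # 4 gender x accent combinations
print(len(cells), len(cells) * voices_per_cell)   # -> 4 8
```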

Voices are selected for each model based on their prominence in the provider's interface and documentation, excluding voices which are not neutral in nature (e.g. Los Angeles 'valley' and deep Southern accents are excluded). Creators of the models may also request that we use specific voices where many are available. Where voices are not provided, as is typically the case for open source models, we use voice clips from professional voice actors as source files for generating speech. All voice clips have been licensed for commercial use.

Below, we list the voices used for each model.

| Model Name | Voices Used (Gender, Accent) |
|---|---|
| Standard, OpenAI TTS | echo (M, US), fable (M, UK), onyx (M, US), shimmer (F, US), nova (F, US), alloy (F, US) |
| HD, OpenAI TTS | nova (F, US), alloy (F, US), onyx (M, US), fable (M, UK), echo (M, US), shimmer (F, US) |
| Studio, Google Cloud TTS | en-US-Studio-Q (M, US), en-US-Studio-O (F, US), en-GB-Studio-B (M, UK), en-GB-Studio-C (F, UK) |
| Journey, Google Cloud TTS | en-US-Journey-D (M, US), en-US-Journey-F (F, US) |
| Neural2, Google Cloud TTS | en-US-Neural2-I (M, US), en-US-Neural2-A (M, US), en-US-Neural2-H (F, US), en-US-Neural2-C (F, US), en-GB-Neural2-B (M, UK), en-GB-Neural2-D (M, UK), en-GB-Neural2-C (F, UK), en-GB-Neural2-A (F, UK) |
| WaveNet, Google Cloud TTS | en-US-Wavenet-I (M, US), en-US-Wavenet-B (M, US), en-US-Wavenet-C (F, US), en-US-Wavenet-F (F, US), en-GB-Wavenet-B (M, UK), en-GB-Wavenet-D (M, UK), en-GB-Wavenet-C (F, UK), en-GB-Wavenet-A (F, UK) |
| Standard, Google Cloud TTS | en-US-Standard-I (M, US), en-US-Standard-A (M, US), en-US-Standard-C (F, US), en-US-Standard-F (F, US), en-GB-Standard-B (M, UK), en-GB-Standard-D (M, UK), en-GB-Standard-C (F, UK), en-GB-Standard-A (F, UK) |
| Long-form, Amazon Polly | Gregory (M, US), Danielle (F, US), Ruth (F, US) |
| Neural, Amazon Polly | Joey (M, US), Gregory (M, US), Joanna (F, US), Danielle (F, US), Brian (M, UK), Amy (F, UK), Emma (F, UK) |
| Standard, Amazon Polly | Joey (M, US), Joanna (F, US), Brian (M, UK), Amy (F, UK) |
| Neural, Microsoft Azure | Andrew Multilingual (M, US), Brian Multilingual (M, US), Ava Multilingual (F, US), Emma Multilingual (F, US), Ryan (M, UK), Alfie (M, UK), Libby (F, UK), Sonia (F, UK) |
| MetaVoice v1 | Susan (F, US), Redd (F, UK), Abbey (F, US), Alan (M, US), Michael (M, US), Amy (F, UK), Tom (M, UK), Dave (M, UK), Aurora (F, UK) |
| XTTS v2 | Susan (F, US), Redd (F, UK), Abbey (F, US), Alan (M, US), Michael (M, US), Amy (F, UK), Tom (M, UK), Dave (M, UK), Aurora (F, UK) |
| StyleTTS 2 | Susan (F, US), Redd (F, UK), Abbey (F, US), Alan (M, US), Michael (M, US), Amy (F, UK), Tom (M, UK), Dave (M, UK), Aurora (F, UK) |
| OpenVoice v2 | Susan (F, US), Redd (F, UK), Abbey (F, US), Alan (M, US), Michael (M, US), Amy (F, UK), Tom (M, UK), Dave (M, UK), Aurora (F, UK) |
| Sonic English (Oct '24), Cartesia | Nonfiction Man (M, US), Newsman (M, US), Classy British Man (M, UK), Polite Man (M, UK), Helpful Woman (F, US), Southern Woman (F, US), British Narration Lady (F, UK), British Lady (F, UK) |
| Turbo v2.5, ElevenLabs | Liam (M, US), Eric (M, US), River (F, US), Jessica (F, US), Daniel (M, UK), George (M, UK), Alice (F, UK), Lily (F, UK) |
| Multilingual v2, ElevenLabs | Liam (M, US), Eric (M, US), River (F, US), Jessica (F, US), Daniel (M, UK), George (M, UK), Alice (F, UK), Lily (F, UK) |
| LMNT | daniel (M, US), terrence (M, US), lily (F, US), chloe (F, US), morgan (F, UK) |

Model and Provider Inclusion Criteria

Our objective is to analyze and compare popular and high-performing Text to Speech models and providers to support end-users in choosing which to use. As such, we apply 'industry significance' and competitive performance tests when evaluating whether to include new models and providers. We are refining these criteria and welcome feedback and suggestions. To suggest models or providers, please contact us via the contact page.

Statement of Independence

Benchmarking is conducted with strict independence and objectivity. No compensation is received from any providers for listing or favorable outcomes on Artificial Analysis.