Text to Speech Benchmarking Methodology
Background and Scope
Artificial Analysis performs benchmarking on Text to Speech models delivered via serverless API endpoints. This page describes our Text to Speech benchmarking methodology, including both our quality benchmarking and performance benchmarking. We consider Text to Speech endpoints to be serverless when customers only pay for usage, not a fixed rate for access.
For both our performance benchmarking and the Speech Arena, our focus is reflecting the end-user experience of the serverless APIs. We benchmark the time to receive the audio file locally: where the API response is a URL rather than audio bytes, we include the time to download the file in our response time measurement. Our approach is to use the standard implementation of each provider's API as suggested by its documentation. Where the provider's API offers the option, we standardize the audio sample rate to 22.05 kHz.
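As a minimal sketch of this measurement approach (the endpoint, request fields and 'audio_url' response key below are hypothetical, not any specific provider's API), the end-to-end timing could look like:

```python
# Minimal sketch: time from sending the request until audio bytes are held locally.
# Assumes a hypothetical JSON API; real provider SDKs and response shapes differ.
import time
import requests

def measure_response_time(endpoint: str, api_key: str, text: str) -> float:
    start = time.monotonic()
    response = requests.post(
        endpoint,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text, "sample_rate": 22050},  # standardize to 22.05 kHz where supported
        timeout=300,
    )
    response.raise_for_status()

    if response.headers.get("Content-Type", "").startswith("audio/"):
        audio_bytes = response.content  # audio returned directly as bytes
    else:
        # Provider returned a URL; the download time is counted in the measurement.
        audio_url = response.json()["audio_url"]
        audio_bytes = requests.get(audio_url, timeout=300).content

    return time.monotonic() - start  # audio_bytes is now available locally
```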
Key Metrics
We use the following metrics to track quality, performance and price for Text to Speech models.
Quality ELO: Relative ELO score of the models as determined by responses from users in the Artificial Analysis Speech Arena.
Some models may not be shown because they do not yet have enough votes. We use a Linear Regression model similar to the one LMSys uses to calculate ELO scores for Chatbot Arena.
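For illustration only, a regression over pairwise arena votes could be fit roughly as follows. The vote format, scaling constants and the use of a Bradley-Terry style logistic fit are assumptions for this sketch, not our exact implementation:

```python
# Illustrative sketch: fit relative scores from pairwise arena votes.
# Assumes votes are (model_a, model_b, outcome) with outcome 1.0 if model_a won.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_arena_scores(votes: list[tuple[str, str, float]], models: list[str],
                     scale: float = 400.0, base: float = 1000.0) -> dict[str, float]:
    index = {m: i for i, m in enumerate(models)}
    X = np.zeros((len(votes), len(models)))
    y = np.zeros(len(votes))
    for row, (model_a, model_b, a_won) in enumerate(votes):
        X[row, index[model_a]] = 1.0   # +1 for the first model in the pair
        X[row, index[model_b]] = -1.0  # -1 for the second model in the pair
        y[row] = a_won
    clf = LogisticRegression(fit_intercept=False, C=1e6)  # large C ~ effectively unregularized
    clf.fit(X, y)
    # Map fitted coefficients onto an ELO-like scale.
    return {m: base + scale * clf.coef_[0][index[m]] for m in models}
```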
Price per 1M Characters: Provider's price (USD) per 1M characters of text.
For providers which do not publish a price per 1M characters, we have estimated pricing using the following alternative methodologies.
For providers which charge based on inference time, we have estimated pricing based on their inference time using a dataset of ~25 texts of ~500 characters. This methodology has been applied to Replicate and fal.ai.
For providers which only offer subscription plans, we select the plan priced closest to $300 per month, which is usually representative of 'Scaled' use of the API, and we assume 80% utilization of the characters offered by that plan. For example, if a $300 plan includes 1 million characters, we assume 800,000 characters are used, giving a price of $300 / (1M × 80%) = $375 per 1M characters. This methodology has been applied to ElevenLabs, Cartesia and LMNT.
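The subscription-plan estimate above reduces to a simple calculation; the sketch below reproduces it, using the same illustrative plan figures as the example:

```python
# Worked version of the subscription-plan pricing estimate described above.
def estimated_price_per_1m_chars(plan_price_usd: float,
                                 plan_characters: int,
                                 utilization: float = 0.80) -> float:
    """Effective USD price per 1M characters, assuming partial utilization of the plan."""
    usable_characters = plan_characters * utilization
    return plan_price_usd / usable_characters * 1_000_000

# Example: $300/month plan with 1M characters at 80% utilization
# -> 300 / 800,000 * 1,000,000 = $375 per 1M characters.
print(estimated_price_per_1m_chars(300, 1_000_000))  # 375.0
```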
Generation Time: Median time the provider takes to generate a single audio clip with ~500 input characters, calculated over the past 14 days of measurements.
Generation Time includes the time to download the audio clip from the provider where a URL is returned rather than an audio response. This reflects the end-user latency of receiving a generated audio clip, and accounts for the fact that URLs can be returned before audio generation is complete. Audio clips are generated at a batch size of 1 where relevant.
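As a minimal sketch of how the metric could be aggregated, assuming stored (timestamp, seconds) pairs from each benchmark run:

```python
# Sketch: median generation time over the trailing 14 days of measurements.
# Assumes timezone-aware timestamps; the storage format is illustrative.
from datetime import datetime, timedelta, timezone
from statistics import median

def median_generation_time(measurements: list[tuple[datetime, float]],
                           window_days: int = 14) -> float:
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    recent = [seconds for timestamp, seconds in measurements if timestamp >= cutoff]
    return median(recent)
```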
Benchmarking is conducted 4 times daily at random times each day. For each benchmarking evaluation we select a single random voice for each model. A unique prompt of ~500 characters is used for each generation.
Model Voices
For each model tested, we test multiple voices to ensure that our comparison between models is representative and fair. Voice characteristics such as accent, gender and style are typically attributes of the individual voices, not of the underlying model. For each model we select 2 voices for each combination of gender (Male, Female) and accent (US, UK), giving 8 voices in total. Where a gender and accent combination is not available, we exclude that combination from evaluation in the Speech Arena.
Voices are selected for each model based on their prominence in the provider's interface and documentation, excluding voices which are not neutral in nature (e.g. Los Angeles 'valley' and deep Southern accents). Creators of the models may also request that we use specific voices where many are available. Where voices are not provided, as is typically the case for open source models, we use voice clips from professional voice actors as source files for generating speech. All voice clips have been licensed for commercial use.
Below, we list the voices used for each model.
Model and Provider Inclusion Criteria
Our objective is to analyze and compare popular and high-performing Text to Speech models and providers to support end-users in choosing which to use. As such, we apply an 'industry significance' and competitive performance test to evaluate the inclusion of new models and providers. We are in the process of refining these criteria and welcome any feedback and suggestions. To suggest models or providers, please contact us via the contact page.
Statement of Independence
Benchmarking is conducted with strict independence and objectivity. No compensation is received from any providers for listing or favorable outcomes on Artificial Analysis.