Personalized Model Recommendation
Get personalized recommendations based on your priorities for intelligence, speed, and cost.
Intelligence
Intelligence of leading AI models based on our independent evaluations
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requiring a paid license).
Intelligence vs. Cost to Run Artificial Analysis Intelligence Index
The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
Image & Video Leaderboards
Top models from our Image Arena and Video Arena leaderboards, with 95% confidence intervals
Text to Image Leaderboard
Frontier Language Model Intelligence, Over Time
Intelligence Evaluations
While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.
AA-Omniscience
AA-Omniscience is a knowledge and hallucination benchmark that rewards accuracy, punishes bad guesses, and provides a comprehensive view of which models produce factually reliable outputs across different domains.
AA-Omniscience Index
AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.
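The scoring rule described above can be sketched as a net-accuracy score. Note this is an illustrative formula inferred from the description (correct answers add, incorrect answers subtract, refusals are neutral), not the official Artificial Analysis definition:

```python
def omniscience_index(correct: int, incorrect: int, refused: int) -> float:
    """Net-accuracy score on [-100, 100] (illustrative, not the official formula).

    Correct answers add, incorrect (hallucinated) answers subtract, and
    refusals count toward the total without any penalty -- so a model that
    refuses everything scores 0 rather than -100.
    """
    total = correct + incorrect + refused
    if total == 0:
        return 0.0
    return 100.0 * (correct - incorrect) / total
```

Under this sketch, `omniscience_index(40, 40, 20)` returns `0.0` (as many correct as incorrect answers), while a model that hallucinates on every question scores `-100.0`.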
GDPval-AA
GDPval-AA evaluates AI models on real-world, economically valuable tasks across a wide range of occupations
GDPval-AA Leaderboard
Artificial Analysis Openness Index
Artificial Analysis Openness Index assesses how 'open' models are on the basis of their availability and transparency across different components.
Artificial Analysis Openness Index: Components
Artificial Analysis Openness Index vs. Artificial Analysis Intelligence Index
Output Tokens
Output tokens of leading AI models based on our independent evaluations
Output Tokens Used to Run Artificial Analysis Intelligence Index
The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).
Cost Efficiency
Cost of leading AI models based on our independent evaluations
Cost to Run Artificial Analysis Intelligence Index
The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
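The cost figure described above is a straightforward product of token counts and per-million-token prices. A minimal sketch, using made-up token counts and prices for illustration:

```python
def cost_to_run(input_tokens: int, output_tokens: int,
                input_price_per_m: float, output_price_per_m: float) -> float:
    """Total USD cost: tokens used across evaluations times per-token price.

    Prices are expressed in USD per million tokens, matching the
    pricing convention used throughout this page.
    """
    return (input_tokens / 1e6) * input_price_per_m \
         + (output_tokens / 1e6) * output_price_per_m

# Hypothetical workload: 30M input tokens at $0.50/M, 80M output tokens at $2.00/M
print(cost_to_run(30_000_000, 80_000_000, 0.50, 2.00))  # 175.0
```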
Speed & Latency
Comparison of first-party API performance
Output Speed
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models that support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
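The measurement described above (tokens per second after the first chunk arrives, so time-to-first-token is excluded) can be sketched from a list of streamed chunk arrivals. This is an illustrative helper, not Artificial Analysis' measurement code:

```python
def output_speed(chunks: list[tuple[float, int]]) -> float:
    """Tokens per second while generating, from streamed chunks.

    `chunks` is a list of (arrival_time_seconds, token_count) pairs in
    arrival order. The first chunk's arrival marks the start of
    generation, so its tokens and the latency before it are excluded,
    matching the "after the first chunk" definition above.
    """
    if len(chunks) < 2:
        raise ValueError("need at least two chunks to measure speed")
    first_t, _ = chunks[0]
    last_t, _ = chunks[-1]
    tokens_after_first = sum(n for _, n in chunks[1:])
    return tokens_after_first / (last_t - first_t)

# Chunks at 0.5s, 1.0s, 1.5s with 10 tokens each: 20 tokens over 1.0s
print(output_speed([(0.5, 10), (1.0, 10), (1.5, 10)]))  # 20.0
```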
Price
Price of leading AI models based on our independent evaluations
Pricing: Input and Output Prices
Price per token included in the request/message sent to the API, represented as USD per million tokens.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Comprehensive benchmarking of GPUs for language model inference
Compare leading Text to Video and Image to Video models
Compare leading Image Generation and Image Editing models
Compare leading Text to Speech models
API Provider Performance
gpt-oss-120B (high)
gpt-oss-20B (high)
GPT-5.2 (xhigh)
GPT-5.3 Codex (xhigh)
Llama 4 Maverick
Gemini 3.1 Flash-Lite Preview
Gemini 3.1 Pro Preview
Gemini 3 Flash
Claude Sonnet 4.6 (max)
Claude 4.5 Haiku
Claude Opus 4.6 (max)
Claude Opus 4.6
Mistral Large 3
DeepSeek V3.2
Grok 4
Grok 4.1 Fast
Nova 2.0 Pro Preview (medium)
MiniMax-M2.5
NVIDIA Nemotron 3 Nano
Kimi K2.5
K-EXAONE
MiMo-V2-Flash (Feb 2026)
KAT-Coder-Pro V1
K2 Think V2
Mi:dm K 2.5 Pro
GLM-5
Qwen3.5 397B A17B
Output Speed vs. Price: gpt-oss-120B (high)
Smaller, emerging providers are offering high output speed at competitive prices.
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 ratio).
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models that support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
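The blended price noted above (a 3:1 weighting of input to output token prices) can be sketched as a simple weighted average. The 3:1 default below follows the chart's note; other weights would suit workloads with a different token mix:

```python
def blended_price(input_price_per_m: float, output_price_per_m: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted blend of input and output prices (USD per million tokens).

    Defaults to the 3:1 input:output ratio used in the chart above.
    """
    total_weight = input_weight + output_weight
    return (input_weight * input_price_per_m
            + output_weight * output_price_per_m) / total_weight

# E.g. hypothetical prices of $0.15/M input and $0.60/M output blend to ~$0.26/M
print(blended_price(0.15, 0.60))
```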
Pricing (Input and Output Prices): gpt-oss-120B (high)
The relative importance of input vs. output token prices varies by use case. E.g. Generation tasks are typically more output token weighted while document processing tasks are more input token weighted.
Price per token included in the request/message sent to the API, represented as USD per million tokens.
Price per token generated by the model (received from the API), represented as USD per million tokens.
Output Speed: gpt-oss-120B (high)
Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models that support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).