Follow us on X, Discord, and LinkedIn to stay up to date with future analysis

Independent analysis of AI

Understand the AI landscape to choose the best model and provider for your use case

State of AI - 2025 Year End Edition (Highlights)

Personalized Model Recommendation

Get personalized recommendations based on your priorities for intelligence, speed, and cost.

Intelligence

Intelligence of leading AI models based on our independent evaluations

Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See the Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Reasoning models are indicated by a lightbulb icon.

Artificial Analysis Intelligence Index by Open Weights / Proprietary

Proprietary
Open Weights


Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).

Intelligence vs. Cost to Run Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index; Cost to Run Intelligence Index
Most attractive quadrant highlighted. Labs shown: Alibaba, Amazon, Anthropic, DeepSeek, Google, Kimi, KwaiKAT, Meta, MiniMax, Mistral, NVIDIA, OpenAI, xAI, Xiaomi, Z AI.

The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
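As a worked illustration of this calculation, a minimal Python sketch (the function name and figures are illustrative, not code from Artificial Analysis):

```python
def eval_run_cost_usd(input_tokens, output_tokens,
                      input_price_per_m, output_price_per_m):
    """Cost of an evaluation run: tokens used times per-million-token prices."""
    return (input_tokens / 1e6) * input_price_per_m \
         + (output_tokens / 1e6) * output_price_per_m

# e.g. 2M input tokens at $1.00/M plus 1M output tokens at $3.00/M -> $5.00
cost = eval_run_cost_usd(2_000_000, 1_000_000, 1.0, 3.0)
```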


Image & Video Leaderboards

Top models from our Image Arena and Video Arena leaderboards, with 95% confidence intervals

Text to Image Leaderboard

ELO scores from blind preference votes in our Image Arena. See the full leaderboard here.
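Arena-style leaderboards of this kind are typically built on Elo-style rating updates from pairwise votes. A generic sketch of the standard Elo update (the K-factor and the exact update scheme used for this leaderboard are not specified here):

```python
def elo_update(r_a, r_b, a_won, k=32):
    """Standard Elo update for player A after one pairwise comparison.

    k=32 is a generic choice; arena systems may use different parameters
    or batch fitting rather than sequential updates.
    """
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # expected score for A
    score_a = 1.0 if a_won else 0.0
    return r_a + k * (score_a - expected_a)
```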

Frontier Language Model Intelligence, Over Time

Labs shown: Alibaba, Anthropic, DeepSeek, Google, Kimi, Korea Telecom, KwaiKAT, LG AI Research, MBZUAI Institute of Foundation Models, Meta, MiniMax, Mistral, OpenAI, xAI, Xiaomi, Z AI.


Intelligence Evaluations

Intelligence evaluations measured independently by Artificial Analysis; Higher is better
Results claimed by AI Lab (not yet independently verified)
GDPval-AA (Agentic Real-World Work Tasks, (ELO-500)/2000)
Terminal-Bench Hard (Agentic Coding & Terminal Use)
𝜏²-Bench Telecom (Agentic Tool Use)
AA-LCR (Long Context Reasoning)
AA-Omniscience Accuracy (Knowledge)
AA-Omniscience Non-Hallucination Rate (1 - Hallucination Rate)
Humanity's Last Exam (Reasoning & Knowledge)
GPQA Diamond (Scientific Reasoning)
SciCode (Coding)
IFBench (Instruction Following)
CritPt (Physics Reasoning)
MMMU Pro (Visual Reasoning)
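As the GDPval-AA entry above indicates, its ELO result is mapped onto the chart's scale via (ELO − 500)/2000; expressed in code (the function name is illustrative):

```python
def gdpval_aa_score(elo):
    # Transform stated in the chart label: (ELO - 500) / 2000.
    return (elo - 500) / 2000
```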

While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.


AA-Omniscience

AA-Omniscience is a knowledge and hallucination benchmark that rewards accuracy, punishes bad guesses, and provides a comprehensive view of which models produce factually reliable outputs across different domains.

AA-Omniscience Index

AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.
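A minimal sketch of a scoring rule with these properties (the equal ±1 weighting and counting refusals in the denominator are assumptions consistent with the description, not a confirmed implementation):

```python
def omniscience_style_index(correct, incorrect, refused):
    # +1 per correct, -1 per incorrect, 0 per refusal, scaled to [-100, 100].
    # Equal weighting and the all-questions denominator are assumptions
    # consistent with, but not confirmed by, the description above.
    total = correct + incorrect + refused
    return 100 * (correct - incorrect) / total

# 50 correct vs. 50 incorrect scores 0; refusing instead of guessing avoids
# the penalty, so (30 correct, 10 incorrect, 60 refused) scores +20.
```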

GDPval-AA

GDPval-AA evaluates AI models on real-world, economically valuable tasks across a wide range of occupations

GDPval-AA Leaderboard

ELO scores for agentic performance on real-world work tasks using web and shell access via Stirrup, an open-source harness developed by Artificial Analysis

Artificial Analysis Openness Index

Artificial Analysis Openness Index assesses how 'open' models are on the basis of their availability and transparency across different components.

Artificial Analysis Openness Index: Components

Openness Index underlying score contribution by components, up to a maximum of 18 (higher is more open)
Model Availability
Transparency - Methodology
Transparency - Post-training Data
Transparency - Pre-training Data
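A hedged illustration of how such a component sum could be tallied. Only the 18-point maximum is stated above; the per-component maxima below are invented for the example and are not Artificial Analysis's actual rubric:

```python
# Hypothetical per-component contributions (maxima invented for illustration);
# the Openness Index is described as a sum of components, capped at 18.
components = {
    "model_availability": 6,
    "transparency_methodology": 4,
    "transparency_post_training_data": 4,
    "transparency_pre_training_data": 4,
}
openness_score = min(sum(components.values()), 18)
```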

Artificial Analysis Openness Index vs. Artificial Analysis Intelligence Index

Artificial Analysis Openness Index; Artificial Analysis Intelligence Index
Most attractive quadrant highlighted. Labs shown: Alibaba, Anthropic, Kimi, LG AI Research, MBZUAI Institute of Foundation Models, Meta, MiniMax, Mistral, NVIDIA, OpenAI, xAI, Z AI.

Output Tokens

Output tokens of leading AI models based on our independent evaluations

Output Tokens Used to Run Artificial Analysis Intelligence Index

Tokens used to run all evaluations in the Artificial Analysis Intelligence Index
Reasoning Tokens
Answer Tokens

The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).

Cost Efficiency

Cost of leading AI models based on our independent evaluations

Cost to Run Artificial Analysis Intelligence Index

Cost (USD) to run all evaluations in the Artificial Analysis Intelligence Index
Input Cost
Reasoning Cost
Output Cost


Speed & Latency

Comparison of first-party API performance

Output Speed

Output Tokens per Second; Higher is better

Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models that support streaming).

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
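A sketch of this measurement definition (variable names are illustrative):

```python
def output_speed_tps(output_tokens, total_seconds, first_chunk_seconds):
    # Tokens per second over the generation window only, i.e. excluding
    # time-to-first-chunk, per the definition above.
    generation_window = total_seconds - first_chunk_seconds
    return output_tokens / generation_window

# e.g. 500 tokens over a 12 s request with a 2 s first chunk -> 50 tokens/s
speed = output_speed_tps(500, 12.0, 2.0)
```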

Price

Price of leading AI models based on our independent evaluations

Pricing: Input and Output Prices

Price: USD per 1M Tokens
Input price
Output price

Price per token included in the request/message sent to the API, represented as USD per million Tokens.


API Provider Performance

Output Speed vs. Price: gpt-oss-120B (high)

Output Speed: Output Tokens per Second, Price: USD per 1M Tokens; 10,000 Input Tokens
Most attractive quadrant highlighted. Providers shown: Amazon Bedrock, Baseten, Cerebras, Clarifai, Cloudflare, Databricks, DeepInfra, DeepInfra (Turbo), Eigen AI, Fireworks, Google Vertex, Groq, Hyperbolic, Lightning AI, Microsoft Azure, Nebius Base, Novita, Parasail, SambaNova, Scaleway, Snowflake, Together.ai, Weights & Biases.

Smaller, emerging providers are offering high output speeds at competitive prices.

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
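A sketch of the stated 3:1 blend, assuming the ratio weights input over output prices (three parts input to one part output):

```python
def blended_price_per_m(input_price, output_price):
    # 3:1 input:output blend stated above (an assumption about direction;
    # the page states only "3:1 ratio").
    return (3 * input_price + 1 * output_price) / 4
```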


Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.

Pricing (Input and Output Prices): gpt-oss-120B (high)

Price: USD per 1M Tokens; Lower is better; 10,000 Input Tokens
Input price
Output price

The relative importance of input vs. output token prices varies by use case: e.g., generation tasks are typically more output-token weighted, while document processing tasks are more input-token weighted.

Price per token included in the request/message sent to the API, represented as USD per million Tokens.

Price per token generated by the model (received from the API), represented as USD per million Tokens.

Output Speed: gpt-oss-120B (high)

Output Speed: Output Tokens per Second; 10,000 Input Tokens

