Highlights
Personalized Model Recommendation
Get personalized recommendations based on your priorities for intelligence, speed, and cost.
How do the latest LLMs stack up?
How do the latest OpenAI models compare?
Which model has the highest hallucination rate?
Intelligence
Intelligence of leading AI models based on our independent evaluations
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
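For intuition only, here is a minimal sketch of how per-evaluation scores could be combined into a single composite. It assumes a simple equal-weighted average over the ten evaluations, each normalised to 0-100; this is an illustrative simplification, not necessarily the published weighting (see the Intelligence Index methodology for the actual approach).

```python
# Illustrative sketch only: assumes an equal-weighted average of per-evaluation
# scores (each normalised to 0-100), which may differ from the published methodology.
EVALS = [
    "GDPval-AA", "𝜏²-Bench Telecom", "Terminal-Bench Hard", "SciCode", "AA-LCR",
    "AA-Omniscience", "IFBench", "Humanity's Last Exam", "GPQA Diamond", "CritPt",
]

def intelligence_index(scores: dict[str, float]) -> float:
    """Combine per-evaluation scores into a single composite index."""
    missing = [e for e in EVALS if e not in scores]
    if missing:
        raise ValueError(f"missing evaluation results: {missing}")
    return sum(scores[e] for e in EVALS) / len(EVALS)
```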
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,intelligenceIndex,detailsUrl,isLabClaimedValue\nGPT-5.2 (xhigh),50.61,/models/gpt-5-2/providers,false\nClaude Opus 4.5,49.11,/models/claude-opus-4-5-thinking/providers,false\nGemini 3 Pro Preview (high),47.94,/models/gemini-3-pro/providers,false\nGPT-5.1 (high),47.03,/models/gpt-5-1/providers,false\nGemini 3 Flash,45.83,/models/gemini-3-flash-reasoning/providers,false\nGPT-5.2 (medium),45.36,/models/gpt-5-2-medium/providers,false\nGPT-5 (high),44.12,/models/gpt-5/providers,false\nGPT-5 Codex (high),44,/models/gpt-5-codex/providers,false\nClaude Opus 4.5,42.52,/models/claude-opus-4-5/providers,false\nClaude 4.5 Sonnet,42.44,/models/claude-4-5-sonnet-thinking/providers,false\nGPT-5 (medium),41.89,/models/gpt-5-medium/providers,false\nGLM-4.7,41.68,/models/glm-4-7/providers,false\nGPT-5.1 Codex (high),41.58,/models/gpt-5-1-codex/providers,false\nGrok 4,41.35,/models/grok-4/providers,false\nDeepSeek V3.2,41.23,/models/deepseek-v3-2-reasoning/providers,false\no3,40.906130954582004,/models/o3/providers,true\no3-pro,40.68990197347841,/models/o3-pro/providers,true\nGPT-5 mini (high),40.6,/models/gpt-5-mini/providers,false\nGemini 3 Pro Preview (low),40.56,/models/gemini-3-pro-low/providers,false\nKimi K2 Thinking,40.3,/models/kimi-k2-thinking/providers,false\nMiniMax-M2.1,39.36,/models/minimax-m2-1/providers,false\nMiMo-V2-Flash,39.06,/models/mimo-v2-flash-reasoning/providers,false\nGPT-5 (low),38.65,/models/gpt-5-low/providers,false\nGPT-5 mini (medium),38.64,/models/gpt-5-mini-medium/providers,false\nClaude 4 Sonnet,38.31,/models/claude-4-sonnet-thinking/providers,false\nGrok 4.1 Fast,38.17,/models/grok-4-1-fast-reasoning/providers,false\nGPT-5.1 Codex mini (high),38.01,/models/gpt-5-1-codex-mini/providers,false\nClaude 4.5 Haiku,36.63,/models/claude-4-5-haiku-reasoning/providers,false\nClaude 4.5 Sonnet,36.59,/models/claude-4-5-sonnet/providers,false\nKAT-Coder-Pro V1,36.08,/models/kat-coder-pro-v1/providers,false\nMiniMax-M2,35.69,/models/minimax-m2/providers,false\nNova 2.0 Pro Preview (medium),35.31,/models/nova-2-0-pro-reasoning-medium/providers,false\nGemini 3 Flash,34.8,/models/gemini-3-flash/providers,false\nGrok 4 Fast,34.63,/models/grok-4-fast-reasoning/providers,false\nClaude 3.7 Sonnet,34.38,/models/claude-3-7-sonnet-thinking/providers,false\nGemini 2.5 Pro,34.12,/models/gemini-2-5-pro/providers,false\nGLM-4.7,33.76,/models/glm-4-7-non-reasoning/providers,false\nDeepSeek V3.1 Terminus,33.46,/models/deepseek-v3-1-terminus-reasoning/providers,false\nDoubao Seed Code,33.22,/models/doubao-seed-code/providers,false\nGPT-5.2,33.08,/models/gpt-5-2-non-reasoning/providers,false\no4-mini (high),32.92,/models/o4-mini/providers,false\ngpt-oss-120B 
(high),32.91,/models/gpt-oss-120b/providers,false\nClaude 4 Sonnet,32.66,/models/claude-4-sonnet/providers,false\nDeepSeek V3.2 Exp,32.48,/models/deepseek-v3-2-reasoning-0925/providers,false\nGrok 3 mini Reasoning (high),32.43,/models/grok-3-mini-reasoning/providers,false\nQwen3 Max Thinking,32.32,/models/qwen3-max-thinking/providers,false\nGLM-4.6,32.24,/models/glm-4-6-reasoning/providers,false\nNova 2.0 Pro Preview (low),32.2,/models/nova-2-0-pro-reasoning-low/providers,false\nK-EXAONE,31.89,/models/k-exaone/providers,false\nClaude 4.1 Opus,31.888270712229925,/models/claude-4-1-opus-thinking/providers,true\nDeepSeek V3.2,31.87,/models/deepseek-v3-2/providers,false\nQwen3 Max,31.01,/models/qwen3-max/providers,false\nGemini 2.5 Flash (Sep),30.78,/models/gemini-2-5-flash-preview-09-2025-reasoning/providers,false\nClaude 3.7 Sonnet,30.63,/models/claude-3-7-sonnet/providers,false\nClaude 4.5 Haiku,30.46,/models/claude-4-5-haiku/providers,false\nGemini 2.5 Pro (Mar),30.295702949046902,/models/gemini-2-5-pro-03-25/providers,true\nMiMo-V2-Flash,30.13,/models/mimo-v2-flash/providers,false\nNova 2.0 Lite (medium),29.94,/models/nova-2-0-lite-reasoning-medium/providers,false\nGLM-4.6,29.79,/models/glm-4-6/providers,false\nGemini 2.5 Pro (May),29.54758250414228,/models/gemini-2-5-pro-05-06/providers,true\nQwen3 235B A22B 2507,29.37,/models/qwen3-235b-a22b-instruct-2507-reasoning/providers,false\nDeepSeek V3.2 Speciale,28.99,/models/deepseek-v3-2-speciale/providers,false\nERNIE 5.0 Thinking Preview,28.91,/models/ernie-5-0-thinking-preview/providers,false\nQwen3 VL 32B,28.573221067997675,/models/qwen3-vl-32b-reasoning/providers,true\nApriel-v1.5-15B-Thinker,28.33194461137262,/models/apriel-v1-5-15b-thinker/providers,true\nKimi K2 0905,28.02,/models/kimi-k2-0905/providers,false\nDeepSeek V3.2 Exp,28,/models/deepseek-v3-2-0925/providers,false\nDeepSeek V3.1 Terminus,27.96,/models/deepseek-v3-1-terminus/providers,false\nDeepSeek V3.1,27.92,/models/deepseek-v3-1-reasoning/providers,false\nNova 2.0 Omni (medium),27.85,/models/nova-2-0-omni-reasoning-medium/providers,false\nApriel-v1.6-15B-Thinker,27.83,/models/apriel-v1-6-15b-thinker/providers,false\nDeepSeek V3.1,27.67,/models/deepseek-v3-1/providers,false\nQwen3 VL 235B A22B,27.53,/models/qwen3-vl-235b-a22b-reasoning/providers,false\nMagistral Medium 1.2,27.49,/models/magistral-medium-2509/providers,false\nClaude 4 Opus,27.361794173399694,/models/claude-4-opus-thinking/providers,true\nGemini 2.5 Flash,27.23,/models/gemini-2-5-flash-reasoning/providers,false\nGPT-5.1,27.17,/models/gpt-5-1-non-reasoning/providers,false\nDeepSeek R1 0528,27.05,/models/deepseek-r1/providers,false\nGLM-4.5,26.84,/models/glm-4.5/providers,false\nGPT-5 nano (high),26.73,/models/gpt-5-nano/providers,false\nQwen3 Next 80B A3B,26.54,/models/qwen3-next-80b-a3b-reasoning/providers,false\nGrok Code Fast 1,26.20839955014027,/models/grok-code-fast-1/providers,true\nQwen3 Max (Preview),26.139039234341066,/models/qwen3-max-preview/providers,true\nKimi K2,25.863428734733116,/models/kimi-k2/providers,true\no3-mini,25.863248068693565,/models/o3-mini/providers,true\nGPT-5 nano (medium),25.81,/models/gpt-5-nano-medium/providers,false\nGPT-4.1,25.8,/models/gpt-4-1/providers,false\no1-pro,25.760825664214558,/models/o1-pro/providers,true\nGemini 2.5 Flash (Sep),25.57,/models/gemini-2-5-flash-preview-09-2025/providers,false\nGrok 3,25.24,/models/grok-3/providers,false\no1,25.21130772179484,/models/o1/providers,true\no3-mini 
(high),25.07,/models/o3-mini-high/providers,false\nSeed-OSS-36B-Instruct,25,/models/seed-oss-36b-instruct/providers,false\nNova 2.0 Lite (low),24.96,/models/nova-2-0-lite-reasoning-low/providers,false\nQwen3 Coder 480B,24.81,/models/qwen3-coder-480b-a35b-instruct/providers,false\nNVIDIA Nemotron 3 Nano,24.72,/models/nvidia-nemotron-3-nano-30b-a3b-reasoning/providers,false\ngpt-oss-20B (high),24.68,/models/gpt-oss-20b/providers,false\nSonar Reasoning Pro,24.62112108368442,/models/sonar-reasoning-pro/providers,true\nQwen3 235B 2507,24.55,/models/qwen3-235b-a22b-instruct-2507/providers,false\nGPT-5 (minimal),24.52,/models/gpt-5-minimal/providers,false\nMiniMax M1 80k,24.33,/models/minimax-m1-80k/providers,false\nGemini 2.5 Flash,24.29283568571511,/models/gemini-2-5-flash-reasoning-04-2025/providers,true\nNova 2.0 Omni (low),24.08,/models/nova-2-0-omni-reasoning-low/providers,false\nHyperCLOVA X SEED Think (32B),24.01,/models/hyperclova-x-seed-think-32b/providers,false\nMotif-2-12.7B,23.855265124432417,/models/motif-2-12-7b/providers,true\nGLM-4.6V,23.81,/models/glm-4-6v-reasoning/providers,false\ngpt-oss-120B (low),23.77,/models/gpt-oss-120b-low/providers,false\no1-preview,23.742890445410186,/models/o1-preview/providers,true\nGLM-4.5-Air,23.62,/models/glm-4-5-air/providers,false\nGrok 4.1 Fast,23.6,/models/grok-4-1-fast/providers,false\nClaude 4.1 Opus,23.565610165346172,/models/claude-4-1-opus/providers,true\nNova 2.0 Pro Preview,23.48,/models/nova-2-0-pro/providers,false\nMi:dm K 2.5 Pro,23.21,/models/mi-dm-k-2-5-pro-dec28/providers,false\nK-EXAONE,23.19,/models/k-exaone-non-reasoning/providers,false\nGPT-4.1 mini,22.94,/models/gpt-4-1-mini/providers,false\nQwen3 30B A3B 2507,22.79,/models/qwen3-30b-a3b-2507-reasoning/providers,false\nGrok 4 Fast,22.77,/models/grok-4-fast/providers,false\nRing-1T,22.66,/models/ring-1t/providers,false\nDeepSeek V3 0324,22.61,/models/deepseek-v3-0324/providers,false\nMagistral Small 1.2,22.54605154658181,/models/magistral-small-2509/providers,true\nMistral Large 3,22.54,/models/mistral-large-3/providers,false\nGemini 2.5 Flash-Lite (Sep),22.34,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning/providers,false\nINTELLECT-3,22.2,/models/intellect-3/providers,false\nClaude 4 Opus,22.171205776538883,/models/claude-4-opus/providers,true\nGPT-5 (ChatGPT),21.834580028932926,/models/gpt-5-chatgpt/providers,true\nHermes 4 405B,21.724919221240167,/models/hermes-4-llama-3-1-405b-reasoning/providers,true\nDevstral 2,21.71,/models/devstral-2/providers,false\nGrok 3 Reasoning Beta,21.647717992955027,/models/grok-3-reasoning/providers,true\nGPT-5 mini (minimal),21.49,/models/gpt-5-mini-minimal/providers,false\ngpt-oss-20B (low),21.17,/models/gpt-oss-20b-low/providers,false\nMistral Medium 3.1,21.1,/models/mistral-medium-3-1/providers,false\nK2-V2 (high),21.05,/models/k2-v2/providers,false\nGemini 2.5 Flash,20.89,/models/gemini-2-5-flash/providers,false\nMiniMax M1 40k,20.856172610675117,/models/minimax-m1-40k/providers,true\nQwen3 VL 235B A22B,20.84,/models/qwen3-vl-235b-a22b-instruct/providers,false\nRing-flash-2.0,20.581651045250553,/models/ring-flash-2-0/providers,true\nHermes 4 70B,20.386235707526104,/models/hermes-4-llama-3-1-70b-reasoning/providers,true\nQwen3 Next 80B A3B,20.26,/models/qwen3-next-80b-a3b-instruct/providers,false\nQwen3 Coder 30B A3B,20.16,/models/qwen3-coder-30b-a3b-instruct/providers,false\nGemini 2.5 Flash-Lite (Sep),20.06,/models/gemini-2-5-flash-lite-preview-09-2025/providers,false\nLlama Nemotron 
Ultra,20.02216586108181,/models/llama-3-1-nemotron-ultra-253b-v1-reasoning/providers,true\nGPT-4.5 (Preview),19.956828896300806,/models/gpt-4-5/providers,true\nQwen3 235B,19.94,/models/qwen3-235b-a22b-instruct-reasoning/providers,false\nLing-flash-2.0,19.916329097188502,/models/ling-flash-2-0/providers,true\no1-mini,19.9,/models/o1-mini/providers,true\nQwQ-32B,19.724216680037447,/models/qwq-32b/providers,true\nQwen3 VL 30B A3B,19.72,/models/qwen3-vl-30b-a3b-reasoning/providers,false\nLing-1T,19.72,/models/ling-1t/providers,false\nGemini 2.0 Flash Thinking exp. (Jan),19.60282838932838,/models/gemini-2-0-flash-thinking-exp-0121/providers,true\nLlama Nemotron Super 49B v1.5,19.39,/models/llama-nemotron-super-49b-v1-5-reasoning/providers,false\nGLM-4.5V,19.267407790488182,/models/glm-4-5v-reasoning/providers,true\nNova Premier,19.23,/models/nova-premier/providers,false\nK2-V2 (medium),19.16,/models/k2-v2-medium/providers,false\nDevstral Small 2,19.08,/models/devstral-small-2/providers,false\nMagistral Medium 1,19.03,/models/magistral-medium/providers,false\nGPT-4o (Aug),18.93,/models/gpt-4o-2024-08-06/providers,false\nGemini 2.0 Flash,18.93,/models/gemini-2-0-flash/providers,false\nLlama 4 Maverick,18.89,/models/llama-4-maverick/providers,false\nOlmo 3 32B Think,18.88824843779319,/models/olmo-3-32b-think/providers,true\nNova 2.0 Lite,18.82,/models/nova-2-0-lite/providers,false\nSolar Pro 2,18.80930200641581,/models/solar-pro-2-preview-reasoning/providers,true\nDevstral Medium,18.8,/models/devstral-medium/providers,false\nQwen3 4B 2507,18.76,/models/qwen3-4b-2507-instruct-reasoning/providers,false\nClaude 3.5 Haiku,18.75,/models/claude-3-5-haiku/providers,false\nDeepSeek R1 (Jan),18.73,/models/deepseek-r1-0120/providers,false\nGPT-4o (Mar),18.558508422577216,/models/gpt-4o-chatgpt-03-25/providers,true\nLlama 3.3 Nemotron Super 49B,18.49174710104611,/models/llama-3-3-nemotron-super-49b-reasoning/providers,true\nGemini 2.0 Pro Experimental,18.052648552336592,/models/gemini-2-0-pro-experimental-02-05/providers,true\nGemini 2.5 Flash-Lite,17.93,/models/gemini-2-5-flash-lite-reasoning/providers,false\nSonar Reasoning,17.87769906391809,/models/sonar-reasoning/providers,true\nGemini 2.5 Flash,17.844790713497186,/models/gemini-2-5-flash-04-2025/providers,true\nMistral Medium 3,17.611990577780258,/models/mistral-medium-3/providers,true\nGLM-4.6V,17.41,/models/glm-4-6v/providers,false\nQwen3 235B,17.27,/models/qwen3-235b-a22b-instruct/providers,false\nERNIE 4.5 300B A47B,17.256259598074,/models/ernie-4-5-300b-a47b/providers,true\nNova 2.0 Omni,17.18,/models/nova-2-0-omni/providers,false\nDeepSeek R1 Distill Qwen 32B,17.166714190986596,/models/deepseek-r1-distill-qwen-32b/providers,true\nHermes 4 405B,17.124153510875814,/models/hermes-4-llama-3-1-405b/providers,true\nQwen3 VL 32B,17.09,/models/qwen3-vl-32b-instruct/providers,false\nQwen3 32B,17.08,/models/qwen3-32b-instruct-reasoning/providers,false\nQwen3 VL 8B,17.04,/models/qwen3-vl-8b-reasoning/providers,false\nDeepSeek V3 (Dec),16.99,/models/deepseek-v3/providers,false\nEXAONE 4.0 32B,16.95,/models/exaone-4-0-32b-reasoning/providers,false\nOlmo 3 7B Think,16.796070376777948,/models/olmo-3-7b-think/providers,true\nMagistral Small 1,16.791620007828033,/models/magistral-small/providers,true\nQwen3 14B,16.79,/models/qwen3-14b-instruct-reasoning/providers,false\nGemini 2.0 Flash (exp),16.7744977006624,/models/gemini-2-0-flash-experimental/providers,true\nDeepSeek R1 0528 Qwen3 8B,16.43067398648048,/models/deepseek-r1-qwen3-8b/providers,true\nQwen3 VL 30B 
A3B,16.31,/models/qwen3-vl-30b-a3b-instruct/providers,false\nQwen2.5 Max,16.282944203161424,/models/qwen-2-5-max/providers,true\nMinistral 3 14B,16.25,/models/ministral-3-14b/providers,false\nFalcon-H1R-7B,16.18,/models/falcon-h1r-7b/providers,false\nGemini 1.5 Pro (Sep),15.99411151166208,/models/gemini-1-5-pro/providers,true\nSolar Pro 2 ,15.99411151166208,/models/solar-pro-2-preview/providers,true\nDeepSeek R1 Distill Llama 70B,15.950177423585206,/models/deepseek-r1-distill-llama-70b/providers,true\nClaude 3.5 Sonnet (Oct),15.926898851351913,/models/claude-35-sonnet/providers,true\nDeepSeek R1 Distill Qwen 14B,15.844510353679771,/models/deepseek-r1-distill-qwen-14b/providers,true\nQwen3 30B,15.8,/models/qwen3-30b-a3b-instruct-reasoning/providers,false\nQwen3 Omni 30B A3B,15.78,/models/qwen3-omni-30b-a3b-reasoning/providers,false\nDevstral Small,15.67,/models/devstral-small/providers,false\nQwen2.5 72B,15.557766275790943,/models/qwen2-5-72b-instruct/providers,true\nSonar,15.492770267156052,/models/sonar/providers,true\nSolar Pro 2,15.49,/models/solar-pro-2-reasoning/providers,false\nNVIDIA Nemotron Nano 9B V2,15.45,/models/nvidia-nemotron-nano-9b-v2-reasoning/providers,false\nLlama Nemotron Super 49B v1.5,15.42,/models/llama-nemotron-super-49b-v1-5/providers,false\nQwen3 30B A3B 2507,15.42,/models/qwen3-30b-a3b-2507/providers,false\nMistral Small 3.2,15.31,/models/mistral-small-3-2/providers,false\nQwen3 8B,15.279926185040281,/models/qwen3-8b-instruct-reasoning/providers,true\nK2-V2 (low),15.27,/models/k2-v2-low/providers,false\nSonar Pro,15.225966789192796,/models/sonar-pro/providers,true\nQwQ 32B-Preview,15.173958740020906,/models/QwQ-32B-Preview/providers,true\nNVIDIA Nemotron Nano 12B v2 VL,15.13,/models/nvidia-nemotron-nano-12b-v2-vl-reasoning/providers,false\nMinistral 3 8B,15.12,/models/ministral-3-8b/providers,false\nLlama 3.3 70B,15.1,/models/llama-3-3-instruct-70b/providers,false\nLing-mini-2.0,15.090794964025125,/models/ling-mini-2-0/providers,true\nQwen3 VL 4B,14.896107733773457,/models/qwen3-vl-4b-reasoning/providers,true\nGPT-4o (Nov),14.762496625016151,/models/gpt-4o/providers,true\nGemini 2.0 Flash-Lite (Feb),14.702194584063244,/models/gemini-2-0-flash-lite-001/providers,true\nMistral Large 2 (Nov),14.676681663368385,/models/mistral-large-2/providers,true\nQwen3 VL 8B,14.64,/models/qwen3-vl-8b-instruct/providers,false\nQwen3 32B,14.532286886975053,/models/qwen3-32b-instruct/providers,true\nGPT-4o (May),14.498529042479218,/models/gpt-4o-2024-05-13/providers,true\nGemini 2.0 Flash-Lite (Preview),14.487085601010705,/models/gemini-2-0-flash-lite-preview/providers,true\nLlama 3.1 Nemotron Nano 4B v1.1,14.433728654243223,/models/llama-3-1-nemotron-nano-4b-reasoning/providers,true\nKimi Linear 48B A3B Instruct,14.414576519674707,/models/kimi-linear-48b-a3b-instruct/providers,true\nReka Flash 3,14.349784905144217,/models/reka-flash-3/providers,true\nLlama 3.3 Nemotron Super 49B,14.34668262331532,/models/llama-3-3-nemotron-super-49b/providers,true\nSolar Pro 2,14.24,/models/solar-pro-2/providers,false\nOlmo 3.1 32B Think,14.24,/models/olmo-3-1-32b-think/providers,false\nQwen3 4B,14.220680621459485,/models/qwen3-4b-instruct-reasoning/providers,true\nLlama 3.1 405B,14.2,/models/llama-3-1-instruct-405b/providers,false\nClaude 3.5 Sonnet (June),14.170308711536471,/models/claude-35-sonnet-june-24/providers,true\nTulu3 405B,14.140503152934265,/models/tulu3-405b/providers,true\nGPT-4o (ChatGPT),14.107047509866186,/models/gpt-4o-chatgpt/providers,true\nQwen3 VL 
4B,14.078586469580603,/models/qwen3-vl-4b-instruct/providers,true\nLlama 4 Scout,14.02,/models/llama-4-scout/providers,false\nNova Pro,14.015305085876673,/models/nova-pro/providers,true\nLlama 3.1 Nemotron 70B,14.01,/models/llama-3-1-nemotron-instruct-70b/providers,false\nPixtral Large,14.000187381306647,/models/pixtral-large-2411/providers,true\nGPT-5 nano (minimal),13.99,/models/gpt-5-nano-minimal/providers,false\nMistral Small 3.1,13.974509959335386,/models/mistral-small-3-1/providers,true\nNVIDIA Nemotron 3 Nano,13.91,/models/nvidia-nemotron-3-nano-30b-a3b/providers,false\nGrok 2,13.886018572918001,/models/grok-2-1212/providers,true\nGemini 1.5 Flash (Sep),13.791318335280737,/models/gemini-1-5-flash/providers,true\nNVIDIA Nemotron Nano 9B V2,13.76,/models/nvidia-nemotron-nano-9b-v2/providers,false\nGPT-4 Turbo,13.715301370059112,/models/gpt-4-turbo/providers,true\nCommand A,13.71,/models/command-a/providers,false\nHermes 4 70B,13.551468975216904,/models/hermes-4-llama-3-1-70b/providers,true\nGPT-4.1 nano,13.37,/models/gpt-4-1-nano/providers,false\nQwen3 14B,13.35,/models/qwen3-14b-instruct/providers,false\nGrok Beta,13.281894010187271,/models/grok-beta/providers,true\nQwen2.5 Instruct 32B,13.236526421623505,/models/qwen2.5-32b-instruct/providers,true\nPhi-4,13.183090068262993,/models/phi-4/providers,true\nGLM-4.5V,13.16,/models/glm-4-5v/providers,false\nQwen3 4B 2507,13.14,/models/qwen3-4b-2507-instruct/providers,false\nLlama 3.1 70B,13.134271418700061,/models/llama-3-1-instruct-70b/providers,true\nQwen3 1.7B,13.068306567976027,/models/qwen3-1.7b-instruct-reasoning/providers,true\nMistral Large 2 (Jul),13.033082438627458,/models/mistral-large-2407/providers,true\nGemini 2.5 Flash-Lite,13.03,/models/gemini-2-5-flash-lite/providers,false\nQwen2.5 Coder 32B,12.865568210057065,/models/qwen2-5-coder-32b-instruct/providers,true\nGPT-4,12.754307113238253,/models/gpt-4/providers,true\nMistral Small 3,12.667127328098278,/models/mistral-small-3/providers,true\nGPT-4o mini,12.647336534486156,/models/gpt-4o-mini/providers,true\nJamba Reasoning 3B,12.565688353543328,/models/jamba-reasoning-3b/providers,true\nDeepSeek-V2.5 (Dec),12.511590737265823,/models/deepseek-v2-5/providers,true\nNova Lite,12.51,/models/nova-lite/providers,false\nQwen3 4B,12.49184791077126,/models/qwen3-4b-instruct/providers,true\nClaude 3 Opus,12.452455969031023,/models/claude-3-opus/providers,true\nQwen3 30B,12.38,/models/qwen3-30b-a3b-instruct/providers,false\nGemini 2.0 Flash Thinking exp. 
(Dec),12.331778143209375,/models/gemini-2-0-flash-thinking-exp-1219/providers,true\nDeepSeek-V2.5,12.3252882665075,/models/deepseek-v2-5-sep-2024/providers,true\nLlama 3.1 8B,12.15,/models/llama-3-1-instruct-8b/providers,false\nDevstral Small (May),12.13536984200864,/models/devstral-small-2505/providers,true\nMistral Saba,12.128983524455348,/models/mistral-saba/providers,true\nDeepSeek R1 Distill Llama 8B,12.10028639307147,/models/deepseek-r1-distill-llama-8b/providers,true\nMinistral 3 3B,12.1,/models/ministral-3-3b/providers,false\nOlmo 3.1 32B Instruct,12.04,/models/olmo-3-1-32b-instruct/providers,false\nGemini 1.5 Pro (May),11.995643433356523,/models/gemini-1-5-pro-may-2024/providers,true\nR1 1776,11.989331081091219,/models/r1-1776/providers,true\nQwen2.5 Turbo,11.973563045943266,/models/qwen-turbo/providers,true\nReka Flash,11.967262408951813,/models/reka-flash/providers,true\nEXAONE 4.0 32B,11.95,/models/exaone-4-0-32b/providers,false\nLlama 3.2 90B (Vision),11.90129896307905,/models/llama-3-2-instruct-90b-vision/providers,true\nSolar Mini,11.901298628500898,/models/solar-mini/providers,true\nGrok-1,11.690189040378796,/models/grok-1/providers,true\nQwen2 72B,11.662530563545237,/models/qwen2-72b-instruct/providers,true\nNova Micro,11.550167660288439,/models/nova-micro/providers,true\nGranite 4.0 H Small,11.46,/models/granite-4-0-h-small/providers,false\nGemini 1.5 Flash-8B,11.131677468292837,/models/gemini-1-5-flash-8b/providers,true\nPhi-4 Mini,10.939443779805618,/models/phi-4-mini/providers,true\nDeepHermes 3 - Mistral 24B,10.888270503865678,/models/deephermes-3-mistral-24b-preview/providers,true\nLlama 3.2 11B (Vision),10.887386887368143,/models/llama-3-2-instruct-11b-vision/providers,true\nQwen3 8B,10.79,/models/qwen3-8b-instruct/providers,false\nGranite 3.3 8B,10.789732126843495,/models/granite-3-3-8b-instruct/providers,true\nJamba 1.5 Large,10.695130465051701,/models/jamba-1-5-large/providers,true\nQwen3 Omni 30B A3B,10.67,/models/qwen3-omni-30b-a3b-instruct/providers,false\nHermes 3 - Llama-3.1 70B,10.647383158323558,/models/hermes-3-llama-3-1-70b/providers,true\nDeepSeek-Coder-V2,10.608221945579603,/models/deepseek-coder-v2/providers,true\nOLMo 2 32B,10.566197101720523,/models/olmo-2-32b/providers,true\nJamba 1.6 Large,10.555304867718469,/models/jamba-1-6-large/providers,true\nGemma 3 27B,10.51,/models/gemma-3-27b/providers,false\nQwen3 0.6B,10.504775390875873,/models/qwen3-0.6b-instruct-reasoning/providers,true\nGemini 1.5 Flash (May),10.46126910425808,/models/gemini-1-5-flash-may-2024/providers,true\nNVIDIA Nemotron Nano 12B v2 VL,10.38,/models/nvidia-nemotron-nano-12b-v2-vl/providers,false\nClaude 3 Sonnet,10.270295671896926,/models/claude-3-sonnet/providers,true\nLlama 3 70B,10.192203109851526,/models/llama-3-instruct-70b/providers,true\nMistral Small (Sep),10.178799016274557,/models/mistral-small/providers,true\nGemini 1.0 Ultra,10.146700927796493,/models/gemini-1-0-ultra/providers,true\nPhi-3 Mini,10.10075277153229,/models/phi-3-mini/providers,true\nGemma 3n E4B (May),10.058952665396129,/models/gemma-3n-e4b-preview-0520/providers,true\nPhi-4 Multimodal,10.04043714573622,/models/phi-4-multimodal/providers,true\nQwen2.5 Coder 7B ,9.982467098648165,/models/qwen2-5-coder-7b-instruct/providers,true\nMistral Large (Feb),9.90917085347575,/models/mistral-large/providers,true\nMixtral 8x22B,9.844182670676119,/models/mistral-8x22b-instruct/providers,true\nLlama 2 Chat 7B,9.738523435238086,/models/llama-2-chat-7b/providers,true\nGemma 3n 
E2B,9.727723997995453,/models/gemma-3n-e2b/providers,true\nLlama 3.2 3B,9.702177369527044,/models/llama-3-2-instruct-3b/providers,true\nJamba 1.7 Large,9.7,/models/jamba-1-7-large/providers,false\nQwen1.5 Chat 110B,9.5481702885671,/models/qwen1.5-110b-chat/providers,true\nGemma 3 12B,9.43,/models/gemma-3-12b/providers,false\nClaude 2.1,9.324651438021968,/models/claude-21/providers,true\nClaude 3 Haiku,9.300141268649064,/models/claude-3-haiku/providers,true\nOLMo 2 7B,9.296750708294477,/models/olmo-2-7b/providers,true\nMolmo 7B-D,9.247608272030813,/models/molmo-7b-d/providers,true\nLlama 3.2 1B,9.13072365693697,/models/llama-3-2-instruct-1b/providers,true\nDeepSeek R1 Distill Qwen 1.5B,9.075326286181491,/models/deepseek-r1-distill-qwen-1-5b/providers,true\nClaude 2.0,9.058555199878974,/models/claude-2/providers,true\nDeepSeek-V2,9.058555199878974,/models/deepseek-v2/providers,true\nMistral Small (Feb),9.039501606668157,/models/mistral-small-2402/providers,true\nMistral Medium,9.010996334814534,/models/mistral-medium/providers,true\nGPT-3.5 Turbo,8.989676386007632,/models/gpt-35-turbo/providers,true\nArctic,8.823244716278813,/models/arctic-instruct/providers,true\nQwen Chat 72B,8.823244716278813,/models/qwen-chat-72b/providers,true\nLFM 40B,8.760765585157351,/models/lfm-40b/providers,true\nLlama 3 8B,8.701018683736894,/models/llama-3-instruct-8b/providers,true\nGemma 3 1B,8.647929714252196,/models/gemma-3-1b/providers,true\nExaone 4.0 1.2B,8.63,/models/exaone-4-0-1-2b-reasoning/providers,false\nExaone 4.0 1.2B,8.62,/models/exaone-4-0-1-2b/providers,false\nPALM-2,8.594046799469973,/models/palm-2/providers,true\nGemini 1.0 Pro,8.501805478425053,/models/gemini-1-0-pro/providers,true\nDeepSeek Coder V2 Lite,8.479458137084212,/models/deepseek-coder-v2-lite/providers,true\nGranite 4.0 H 1B,8.4,/models/granite-4-0-h-nano-1b/providers,false\nGemma 3 270M,8.372813365251153,/models/gemma-3-270m/providers,true\nLlama 2 Chat 70B,8.370802665737399,/models/llama-2-chat-70b/providers,true\nDeepSeek LLM 67B (V1),8.370802665737399,/models/deepseek-llm-67b-chat/providers,true\nLlama 2 Chat 13B,8.35759395005398,/models/llama-2-chat-13b/providers,true\nCommand-R+ (Apr),8.348799720910122,/models/command-r-plus-04-2024/providers,true\nOpenChat 3.5,8.320282434218926,/models/openchat-35/providers,true\nDBRX,8.315903697714282,/models/dbrx/providers,true\nOlmo 3 7B,8.22,/models/olmo-3-7b-instruct/providers,false\nJamba 1.5 Mini,8.027724014032357,/models/jamba-1-5-mini/providers,true\nLFM2.5-1.2B-Instruct,7.95,/models/lfm2-5-1-2b-instruct/providers,false\nJamba 1.7 Mini,7.88,/models/jamba-1-7-mini/providers,false\nJamba 1.6 Mini,7.870810849851021,/models/jamba-1-6-mini/providers,true\nLFM2 2.6B,7.86,/models/lfm2-2-6b/providers,false\nMixtral 8x7B,7.731195590246845,/models/mixtral-8x7b-instruct/providers,true\nGranite 4.0 1B,7.66,/models/granite-4-0-nano-1b/providers,false\nGranite 4.0 Micro,7.65,/models/granite-4-0-micro/providers,false\nDeepHermes 3 - Llama-3.1 8B,7.578083680871105,/models/deephermes-3-llama-3-1-8b-preview/providers,true\nLlama 65B,7.413887665229345,/models/llama-65b/providers,true\nQwen Chat 14B,7.413887665229345,/models/qwen-chat-14b/providers,true\nClaude Instant,7.413887665229345,/models/claude-instant/providers,true\nMistral 7B,7.413887665229345,/models/mistral-7b-instruct/providers,true\nCommand-R (Mar),7.413887665229345,/models/command-r-03-2024/providers,true\nLFM2 8B A1B,7.32,/models/lfm2-8b-a1b/providers,false\nMolmo2-8B,7.31,/models/molmo2-8b/providers,false\nGranite 4.0 
350M,7.2,/models/granite-4-0-350m/providers,false\nLFM2 1.2B,6.84,/models/lfm2-1-2b/providers,false\nQwen3 1.7B,6.78,/models/qwen3-1.7b-instruct/providers,false\nGemma 3 4B,6.61,/models/gemma-3-4b/providers,false\nGemma 3n E4B,6.27,/models/gemma-3n-e4b/providers,false\nQwen3 0.6B,6.14,/models/qwen3-0.6b-instruct/providers,false\nGranite 4.0 H 350M,6.02,/models/granite-4-0-h-350m/providers,false"}
Artificial Analysis Intelligence Index by Open Weights vs Proprietary
Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Open Weights vs Proprietary","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Image & Video Leaderboards
Top models from our Image Arena and Video Arena leaderboards, with 95% confidence intervals
Text to Image Leaderboard
Frontier Language Model Intelligence, Over Time
Intelligence Evaluations
While model intelligence generally translates across use cases, specific evaluations may be more relevant for particular applications.
Artificial Analysis Omniscience
AA-Omniscience is a knowledge and hallucination benchmark that rewards accuracy, punishes bad guesses, and provides a comprehensive view of which models produce factually reliable outputs across different domains.
AA-Omniscience Index
AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.
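A minimal sketch of a score with these properties, assuming a symmetric +1/-1 weighting for correct and incorrect answers (the benchmark's exact weighting is not specified here):

```python
def omniscience_style_index(correct: int, incorrect: int, abstained: int) -> float:
    """Hypothetical scoring with the properties described above: correct answers add,
    incorrect (hallucinated) answers subtract, refusals neither add nor subtract.
    Returns a value in [-100, 100]; 0 means equal numbers of correct and incorrect answers."""
    total = correct + incorrect + abstained
    if total == 0:
        raise ValueError("no questions answered")
    return 100.0 * (correct - incorrect) / total

# Example: 40 correct, 25 incorrect, 35 refusals out of 100 questions -> index of 15.0
print(omniscience_style_index(40, 25, 35))
```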
{"@context":"https://schema.org","@type":"Dataset","name":"AA-Omniscience Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,omniscienceIndex,detailsUrl,isLabClaimedValue\nGemini 3 Pro Preview (high),12.867,/models/gemini-3-pro/providers,false\nClaude Opus 4.5,10.233,/models/claude-opus-4-5-thinking/providers,false\nGemini 3 Flash,8.233,/models/gemini-3-flash-reasoning/providers,false\nClaude 4.1 Opus,4.933,/models/claude-4-1-opus-thinking/providers,false\nGPT-5.1 (high),2.2,/models/gpt-5-1/providers,false\nGrok 4,0.95,/models/grok-4/providers,false\nRing-1T,0.017,/models/ring-1t/providers,false\nJamba 1.7 Large,-0.217,/models/jamba-1-7-large/providers,false\nLing-mini-2.0,-0.45,/models/ling-mini-2-0/providers,false\nJamba 1.7 Mini,-0.5,/models/jamba-1-7-mini/providers,false\nGemini 3 Flash,-0.917,/models/gemini-3-flash/providers,false\nGemini 3 Pro Preview (low),-1.05,/models/gemini-3-pro-low/providers,false\nClaude 3.7 Sonnet,-1.733,/models/claude-3-7-sonnet-thinking/providers,false\nClaude 4 Sonnet,-1.767,/models/claude-4-sonnet-thinking/providers,false\nClaude 4.5 Sonnet,-2.083,/models/claude-4-5-sonnet-thinking/providers,false\nGPT-5.2 (medium),-2.7,/models/gpt-5-2-medium/providers,false\nGPT-5.2 (xhigh),-4.317,/models/gpt-5-2/providers,false\nClaude 4.5 Haiku,-5.667,/models/claude-4-5-haiku-reasoning/providers,false\nClaude Opus 4.5,-6.45,/models/claude-opus-4-5/providers,false\nGPT-5.1 Codex (high),-7.017,/models/gpt-5-1-codex/providers,false\nGrok 3 mini Reasoning (high),-7.15,/models/grok-3-mini-reasoning/providers,false\nClaude 4.5 Haiku,-7.95,/models/claude-4-5-haiku/providers,false\nGPT-5 Codex (high),-9.667,/models/gpt-5-codex/providers,false\nClaude 4 Sonnet,-10.367,/models/claude-4-sonnet/providers,false\nClaude 4.5 Sonnet,-10.65,/models/claude-4-5-sonnet/providers,false\nClaude 3.7 Sonnet,-10.983,/models/claude-3-7-sonnet/providers,false\nGPT-5 (high),-11.1,/models/gpt-5/providers,false\nGPT-4o (Nov),-12.05,/models/gpt-4o/providers,false\no1,-12.817,/models/o1/providers,false\nGPT-5 (low),-12.933,/models/gpt-5-low/providers,false\nGPT-5 mini (medium),-12.933,/models/gpt-5-mini-medium/providers,false\nGPT-5 (medium),-13.733,/models/gpt-5-medium/providers,false\nGPT-5.2,-15.4,/models/gpt-5-2-non-reasoning/providers,false\no3,-17.183,/models/o3/providers,false\nGemini 2.5 Pro,-17.95,/models/gemini-2-5-pro/providers,false\nLlama 3.1 405B,-18.167,/models/llama-3-1-instruct-405b/providers,false\nGPT-5.1 Codex mini (high),-18.283,/models/gpt-5-1-codex-mini/providers,false\nDeepSeek V3.2 Speciale,-19.233,/models/deepseek-v3-2-speciale/providers,false\nGPT-5 mini (high),-19.617,/models/gpt-5-mini/providers,false\nGPT-4o (Aug),-21.733,/models/gpt-4o-2024-08-06/providers,false\nDeepSeek 
V3.2,-23.317,/models/deepseek-v3-2-reasoning/providers,false\nClaude 3.5 Haiku,-23.35,/models/claude-3-5-haiku/providers,false\nKimi K2 Thinking,-23.417,/models/kimi-k2-thinking/providers,false\nQwen3 Coder 480B,-23.967,/models/qwen3-coder-480b-a35b-instruct/providers,false\nGLM-4.6V,-26.25,/models/glm-4-6v-reasoning/providers,false\nDeepSeek V3.1 Terminus,-26.7,/models/deepseek-v3-1-terminus-reasoning/providers,false\nGPT-5 nano (medium),-27.35,/models/gpt-5-nano-medium/providers,false\nCogito v2.1,-27.417,/models/cogito-v2-1-reasoning/providers,false\nMagistral Medium 1.2,-27.633,/models/magistral-medium-2509/providers,false\nMagistral Medium 1,-28,/models/magistral-medium/providers,false\nKimi K2 0905,-28.35,/models/kimi-k2-0905/providers,false\nGLM-4.5,-29.017,/models/glm-4.5/providers,false\nGPT-5 nano (high),-29.65,/models/gpt-5-nano/providers,false\nDeepSeek R1 0528,-29.667,/models/deepseek-r1/providers,false\nMiniMax-M2.1,-29.8,/models/minimax-m2-1/providers,false\nKimi K2,-30.117,/models/kimi-k2/providers,false\nGrok 4 Fast,-30.5,/models/grok-4-fast-reasoning/providers,false\nDeepSeek V3.1,-30.583,/models/deepseek-v3-1-reasoning/providers,false\nGemini 2.5 Flash,-30.85,/models/gemini-2-5-flash-reasoning/providers,false\nGrok 4.1 Fast,-31.383,/models/grok-4-1-fast-reasoning/providers,false\nLlama 3.1 8B,-31.6,/models/llama-3-1-instruct-8b/providers,false\nDeepSeek V3.2 Exp,-31.9,/models/deepseek-v3-2-reasoning-0925/providers,false\nMistral Medium 3,-32.617,/models/mistral-medium-3/providers,false\nDevstral Medium,-32.8,/models/devstral-medium/providers,false\nGLM-4.6,-33.25,/models/glm-4-6/providers,false\nDeepSeek R1 (Jan),-33.633,/models/deepseek-r1-0120/providers,false\nHermes 4 405B,-34.633,/models/hermes-4-llama-3-1-405b/providers,false\nGrok 3,-35.267,/models/grok-3/providers,false\nMistral Large 2 (Nov),-35.517,/models/mistral-large-2/providers,false\nKAT-Coder-Pro V1,-35.533,/models/kat-coder-pro-v1/providers,false\nDoubao Seed Code,-35.933,/models/doubao-seed-code/providers,false\nGLM-4.7,-36.267,/models/glm-4-7/providers,false\nGPT-5.1,-36.583,/models/gpt-5-1-non-reasoning/providers,false\nGPT-5 (minimal),-36.667,/models/gpt-5-minimal/providers,false\nERNIE 4.5 300B A47B,-36.833,/models/ernie-4-5-300b-a47b/providers,false\no4-mini (high),-37.183,/models/o4-mini/providers,false\nHermes 4 405B,-37.367,/models/hermes-4-llama-3-1-405b-reasoning/providers,false\nGemini 2.5 Flash (Sep),-37.5,/models/gemini-2-5-flash-preview-09-2025-reasoning/providers,false\nGrok Code Fast 1,-38.033,/models/grok-code-fast-1/providers,false\nNova Premier,-38.317,/models/nova-premier/providers,false\nGLM-4.6V,-38.65,/models/glm-4-6v/providers,false\nOlmo 3.1 32B Think,-39.483,/models/olmo-3-1-32b-think/providers,false\nQwen3 Max Thinking,-39.783,/models/qwen3-max-thinking/providers,false\nMistral Large 3,-40.983,/models/mistral-large-3/providers,false\nGemini 2.5 Flash (Sep),-41.317,/models/gemini-2-5-flash-preview-09-2025/providers,false\nLlama 3.1 Nemotron 70B,-41.417,/models/llama-3-1-nemotron-instruct-70b/providers,false\nMiMo-V2-Flash,-41.833,/models/mimo-v2-flash-reasoning/providers,false\nGPT-4.1,-42.133,/models/gpt-4-1/providers,false\nDeepSeek V3 0324,-42.283,/models/deepseek-v3-0324/providers,false\nERNIE 5.0 Thinking Preview,-42.367,/models/ernie-5-0-thinking-preview/providers,false\nNVIDIA Nemotron Nano 9B V2,-43.217,/models/nvidia-nemotron-nano-9b-v2-reasoning/providers,false\nLlama 4 Maverick,-43.467,/models/llama-4-maverick/providers,false\nDeepSeek 
V3.1,-43.533,/models/deepseek-v3-1/providers,false\nNova Lite,-43.55,/models/nova-lite/providers,false\nQwen3 Max (Preview),-43.567,/models/qwen3-max-preview/providers,false\nDeepSeek V3 (Dec),-43.633,/models/deepseek-v3/providers,false\nGemini 2.5 Flash-Lite (Sep),-43.717,/models/gemini-2-5-flash-lite-preview-09-2025/providers,false\nGemini 2.5 Flash,-43.75,/models/gemini-2-5-flash/providers,false\nGLM-4.6,-43.883,/models/glm-4-6-reasoning/providers,false\no3-mini (high),-44.283,/models/o3-mini-high/providers,false\nGemini 2.0 Flash,-44.333,/models/gemini-2-0-flash/providers,false\nLlama 3.1 70B,-44.417,/models/llama-3-1-instruct-70b/providers,false\nDeepSeek V3.1 Terminus,-44.583,/models/deepseek-v3-1-terminus/providers,false\nMiMo-V2-Flash,-44.6,/models/mimo-v2-flash/providers,false\nQwen3 Max,-44.9,/models/qwen3-max/providers,false\nMagistral Small 1,-45.2,/models/magistral-small/providers,false\nQwen3 235B 2507,-45.383,/models/qwen3-235b-a22b-instruct-2507/providers,false\nQwen3 235B,-45.55,/models/qwen3-235b-a22b-instruct-reasoning/providers,false\nLlama Nemotron Ultra,-46.2,/models/llama-3-1-nemotron-ultra-253b-v1-reasoning/providers,false\nGLM-4.5V,-46.417,/models/glm-4-5v-reasoning/providers,false\nQwen3 VL 235B A22B,-46.567,/models/qwen3-vl-235b-a22b-reasoning/providers,false\nGemini 2.5 Flash-Lite,-46.983,/models/gemini-2-5-flash-lite-reasoning/providers,false\nLlama Nemotron Super 49B v1.5,-47.2,/models/llama-nemotron-super-49b-v1-5/providers,false\nDeepSeek R1 Distill Llama 70B,-47.433,/models/deepseek-r1-distill-llama-70b/providers,false\nLlama Nemotron Super 49B v1.5,-47.467,/models/llama-nemotron-super-49b-v1-5-reasoning/providers,false\nNova 2.0 Pro Preview (low),-47.5,/models/nova-2-0-pro-reasoning-low/providers,false\nQwen3 235B A22B 2507,-47.7,/models/qwen3-235b-a22b-instruct-2507-reasoning/providers,false\nMistral Medium 3.1,-47.9,/models/mistral-medium-3-1/providers,false\nDevstral 2,-47.917,/models/devstral-2/providers,false\nGLM-4.7,-48.233,/models/glm-4-7-non-reasoning/providers,false\nNova Pro,-48.517,/models/nova-pro/providers,false\nDeepSeek V3.2,-48.683,/models/deepseek-v3-2/providers,false\nK2-V2 (low),-49.017,/models/k2-v2-low/providers,false\nDeepSeek V3.2 Exp,-49.117,/models/deepseek-v3-2-0925/providers,false\nNova Micro,-49.35,/models/nova-micro/providers,false\nMiniMax-M2,-49.533,/models/minimax-m2/providers,false\nCommand A,-49.583,/models/command-a/providers,false\nHermes 4 70B,-50.033,/models/hermes-4-llama-3-1-70b/providers,false\nMiniMax M1 80k,-50.167,/models/minimax-m1-80k/providers,false\nNova 2.0 Pro Preview (medium),-50.3,/models/nova-2-0-pro-reasoning-medium/providers,false\nQwen3 14B,-50.317,/models/qwen3-14b-instruct-reasoning/providers,false\nNova 2.0 Pro Preview,-50.367,/models/nova-2-0-pro/providers,false\nK2-V2 (medium),-50.6,/models/k2-v2-medium/providers,false\nHermes 4 70B,-50.717,/models/hermes-4-llama-3-1-70b-reasoning/providers,false\nLlama 3.3 Nemotron Super 49B,-51.017,/models/llama-3-3-nemotron-super-49b/providers,false\nMistral Small 3.2,-51.3,/models/mistral-small-3-2/providers,false\nNova 2.0 Omni (low),-51.4,/models/nova-2-0-omni-reasoning-low/providers,false\nQwen3 32B,-51.5,/models/qwen3-32b-instruct-reasoning/providers,false\nQwen3 Coder 30B A3B,-51.7,/models/qwen3-coder-30b-a3b-instruct/providers,false\ngpt-oss-120B (high),-51.933,/models/gpt-oss-120b/providers,false\nDevstral Small,-51.967,/models/devstral-small/providers,false\nHyperCLOVA X SEED Think 
(32B),-51.983,/models/hyperclova-x-seed-think-32b/providers,false\nMistral Small 3.1,-52.183,/models/mistral-small-3-1/providers,false\nGrok 4.1 Fast,-52.317,/models/grok-4-1-fast/providers,false\nQwen3 30B,-52.333,/models/qwen3-30b-a3b-instruct-reasoning/providers,false\nNVIDIA Nemotron 3 Nano,-52.383,/models/nvidia-nemotron-3-nano-30b-a3b-reasoning/providers,false\nINTELLECT-3,-52.383,/models/intellect-3/providers,false\nQwen3 Next 80B A3B,-52.783,/models/qwen3-next-80b-a3b-reasoning/providers,false\nLlama 4 Scout,-53.05,/models/llama-4-scout/providers,false\nQwen3 VL 32B,-53.233,/models/qwen3-vl-32b-reasoning/providers,false\nOlmo 3.1 32B Instruct,-53.317,/models/olmo-3-1-32b-instruct/providers,false\nQwen2.5 72B,-53.517,/models/qwen2-5-72b-instruct/providers,false\nSeed-OSS-36B-Instruct,-53.533,/models/seed-oss-36b-instruct/providers,false\nQwen3 VL 8B,-53.8,/models/qwen3-vl-8b-instruct/providers,false\nQwen3 4B 2507,-53.833,/models/qwen3-4b-2507-instruct/providers,false\nQwen3 VL 235B A22B,-53.867,/models/qwen3-vl-235b-a22b-instruct/providers,false\nQwen3 VL 8B,-54.317,/models/qwen3-vl-8b-reasoning/providers,false\nQwen3 235B,-54.333,/models/qwen3-235b-a22b-instruct/providers,false\nGemini 2.5 Flash-Lite (Sep),-54.633,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning/providers,false\nQwen3 4B 2507,-54.667,/models/qwen3-4b-2507-instruct-reasoning/providers,false\nNova 2.0 Lite (low),-54.95,/models/nova-2-0-lite-reasoning-low/providers,false\nLFM2 2.6B,-54.95,/models/lfm2-2-6b/providers,false\nLlama 3 70B,-54.95,/models/llama-3-instruct-70b/providers,false\nMi:dm K 2.5 Pro,-55.217,/models/mi-dm-k-2-5-pro-dec28/providers,false\nLlama 3.2 1B,-55.433,/models/llama-3-2-instruct-1b/providers,false\nLlama 3.3 70B,-55.467,/models/llama-3-3-instruct-70b/providers,false\nGPT-5 mini (minimal),-55.6,/models/gpt-5-mini-minimal/providers,false\nGrok 4 Fast,-55.683,/models/grok-4-fast/providers,false\nGPT-4.1 mini,-55.7,/models/gpt-4-1-mini/providers,false\nApriel-v1.5-15B-Thinker,-55.85,/models/apriel-v1-5-15b-thinker/providers,false\ngpt-oss-120B (low),-55.933,/models/gpt-oss-120b-low/providers,false\nMi:dm K 2.5 Pro Preview,-56.017,/models/midm-250-pro-rsnsft/providers,false\nPhi-4,-56.167,/models/phi-4/providers,false\nGLM-4.5V,-56.867,/models/glm-4-5v/providers,false\nLing-1T,-57.167,/models/ling-1t/providers,false\nK2-V2 (high),-57.283,/models/k2-v2/providers,false\nQwen3 30B A3B 2507,-57.433,/models/qwen3-30b-a3b-2507-reasoning/providers,false\nSolar Pro 2,-57.533,/models/solar-pro-2-reasoning/providers,false\nNova 2.0 Lite (medium),-57.633,/models/nova-2-0-lite-reasoning-medium/providers,false\nDevstral Small (May),-58.017,/models/devstral-small-2505/providers,false\nNVIDIA Nemotron Nano 9B V2,-58.383,/models/nvidia-nemotron-nano-9b-v2/providers,false\nK-EXAONE,-58.75,/models/k-exaone/providers,false\nDevstral Small 2,-58.883,/models/devstral-small-2/providers,false\nGPT-4.1 nano,-58.95,/models/gpt-4-1-nano/providers,false\nQwen3 VL 30B A3B,-59.133,/models/qwen3-vl-30b-a3b-reasoning/providers,false\nGemini 2.5 Flash-Lite,-59.45,/models/gemini-2-5-flash-lite/providers,false\nNova 2.0 Omni (medium),-59.7,/models/nova-2-0-omni-reasoning-medium/providers,false\nRing-flash-2.0,-59.767,/models/ring-flash-2-0/providers,false\nApriel-v1.6-15B-Thinker,-59.833,/models/apriel-v1-6-15b-thinker/providers,false\nOlmo 3 32B Think,-60.25,/models/olmo-3-32b-think/providers,false\nNova 2.0 Lite,-60.483,/models/nova-2-0-lite/providers,false\nQwen3 Next 80B 
A3B,-60.483,/models/qwen3-next-80b-a3b-instruct/providers,false\ngpt-oss-20B (low),-60.6,/models/gpt-oss-20b-low/providers,false\nEXAONE 4.0 32B,-61.417,/models/exaone-4-0-32b-reasoning/providers,false\nQwen3 Omni 30B A3B,-61.767,/models/qwen3-omni-30b-a3b-reasoning/providers,false\nFalcon-H1R-7B,-61.917,/models/falcon-h1r-7b/providers,false\nGranite 4.0 H Small,-62.067,/models/granite-4-0-h-small/providers,false\nMotif-2-12.7B,-62.233,/models/motif-2-12-7b/providers,false\nPhi-4 Mini,-62.7,/models/phi-4-mini/providers,false\nJamba Reasoning 3B,-62.833,/models/jamba-reasoning-3b/providers,false\nLlama 3.2 11B (Vision),-62.967,/models/llama-3-2-instruct-11b-vision/providers,false\nSolar Pro 2,-63.117,/models/solar-pro-2/providers,false\nGLM-4.5-Air,-63.15,/models/glm-4-5-air/providers,false\nGranite 4.0 350M,-63.683,/models/granite-4-0-350m/providers,false\nQwen3 VL 32B,-63.9,/models/qwen3-vl-32b-instruct/providers,false\nMinistral 3 3B,-63.967,/models/ministral-3-3b/providers,false\nQwen3 VL 30B A3B,-64.033,/models/qwen3-vl-30b-a3b-instruct/providers,false\nEXAONE 4.0 32B,-64.3,/models/exaone-4-0-32b/providers,false\ngpt-oss-20B (high),-64.9,/models/gpt-oss-20b/providers,false\nNVIDIA Nemotron 3 Nano,-65.2,/models/nvidia-nemotron-3-nano-30b-a3b/providers,false\nNova 2.0 Omni,-65.233,/models/nova-2-0-omni/providers,false\nReka Flash 3,-65.233,/models/reka-flash-3/providers,false\nDeepSeek R1 0528 Qwen3 8B,-65.317,/models/deepseek-r1-qwen3-8b/providers,false\nK-EXAONE,-65.95,/models/k-exaone-non-reasoning/providers,false\nQwen3 8B,-66.117,/models/qwen3-8b-instruct-reasoning/providers,false\nMistral 7B,-66.25,/models/mistral-7b-instruct/providers,false\nNVIDIA Nemotron Nano 12B v2 VL,-66.35,/models/nvidia-nemotron-nano-12b-v2-vl-reasoning/providers,false\nGPT-5 nano (minimal),-66.367,/models/gpt-5-nano-minimal/providers,false\nMagistral Small 1.2,-66.383,/models/magistral-small-2509/providers,false\nQwen3 30B A3B 2507,-66.8,/models/qwen3-30b-a3b-2507/providers,false\nMinistral 3 14B,-67.383,/models/ministral-3-14b/providers,false\nLing-flash-2.0,-67.45,/models/ling-flash-2-0/providers,false\nGemma 3 27B,-67.95,/models/gemma-3-27b/providers,false\nQwen3 30B,-67.983,/models/qwen3-30b-a3b-instruct/providers,false\nQwen3 14B,-68.3,/models/qwen3-14b-instruct/providers,false\nMolmo2-8B,-69.433,/models/molmo2-8b/providers,false\nQwen3 Omni 30B A3B,-69.75,/models/qwen3-omni-30b-a3b-instruct/providers,false\nMinistral 3 8B,-69.983,/models/ministral-3-8b/providers,false\nLFM2 1.2B,-71.217,/models/lfm2-1-2b/providers,false\nLlama 3 8B,-71.65,/models/llama-3-instruct-8b/providers,false\nNVIDIA Nemotron Nano 12B v2 VL,-73.167,/models/nvidia-nemotron-nano-12b-v2-vl/providers,false\nOlmo 3 7B Think,-73.967,/models/olmo-3-7b-think/providers,false\nGranite 4.0 H 1B,-74.383,/models/granite-4-0-h-nano-1b/providers,false\nLFM2.5-1.2B-Instruct,-74.75,/models/lfm2-5-1-2b-instruct/providers,false\nQwen3 8B,-75.4,/models/qwen3-8b-instruct/providers,false\nGemma 3 12B,-77.25,/models/gemma-3-12b/providers,false\nLFM2 8B A1B,-77.517,/models/lfm2-8b-a1b/providers,false\nOlmo 3 7B,-78.183,/models/olmo-3-7b-instruct/providers,false\nGranite 4.0 Micro,-78.35,/models/granite-4-0-micro/providers,false\nQwen3 1.7B,-78.35,/models/qwen3-1.7b-instruct-reasoning/providers,false\nGranite 3.3 8B,-78.95,/models/granite-3-3-8b-instruct/providers,false\nGemma 3 1B,-80.25,/models/gemma-3-1b/providers,false\nGemma 3n E2B,-80.617,/models/gemma-3n-e2b/providers,false\nGemma 3n E4B,-81.983,/models/gemma-3n-e4b/providers,false\nQwen3 
1.7B,-82.367,/models/qwen3-1.7b-instruct/providers,false\nQwen3 0.6B,-82.45,/models/qwen3-0.6b-instruct-reasoning/providers,false\nExaone 4.0 1.2B,-82.467,/models/exaone-4-0-1-2b-reasoning/providers,false\nGranite 4.0 1B,-83,/models/granite-4-0-nano-1b/providers,false\nExaone 4.0 1.2B,-83.167,/models/exaone-4-0-1-2b/providers,false\nGemma 3 4B,-83.817,/models/gemma-3-4b/providers,false\nQwen3 0.6B,-86.85,/models/qwen3-0.6b-instruct/providers,false\nGranite 4.0 H 350M,-89.467,/models/granite-4-0-h-350m/providers,false"}
GDPval-AA
GDPval-AA evaluates AI models on real-world, economically valuable tasks across a wide range of occupations.
GDPval-AA Leaderboard
Artificial Analysis Openness Index
The Artificial Analysis Openness Index assesses how 'open' models are, based on the availability and transparency of their different components.
Artificial Analysis Openness Index: Components
Artificial Analysis Openness Index vs. Artificial Analysis Intelligence Index
Output Tokens
Output tokens of leading AI models based on our independent evaluations
Output Tokens Used to Run Artificial Analysis Intelligence Index
The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).
Cost Efficiency
Cost of leading AI models based on our independent evaluations
Cost to Run Artificial Analysis Intelligence Index
The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
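In other words, the figure is a token-weighted sum of input and output costs. A minimal sketch, with the token counts and per-million prices below as placeholder inputs:

```python
def eval_suite_cost_usd(input_tokens: int, output_tokens: int,
                        input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost of running the evaluation suite, given total tokens used and the model's
    USD-per-million-token prices. All values here are placeholders."""
    return (input_tokens / 1e6) * input_price_per_m + (output_tokens / 1e6) * output_price_per_m

# Example with hypothetical numbers: 20M input and 60M output tokens at $0.50 / $2.00 per 1M
print(eval_suite_cost_usd(20_000_000, 60_000_000, 0.50, 2.00))  # -> 130.0
```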
Speed & Latency
Comparison of first-party API performance
Output Speed
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
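A minimal sketch of how this metric could be measured against a streaming API, timing from the first received chunk to the end of the stream. Here `stream` and `count_tokens` are placeholders for a provider-specific chunk iterator and a model-specific tokenizer.

```python
import time
from typing import Callable, Iterable

def output_speed(stream: Iterable[str], count_tokens: Callable[[str], int]) -> float:
    """Tokens per second from the first received chunk onwards, per the definition above."""
    first_chunk_at = None
    text = ""
    for chunk in stream:
        if first_chunk_at is None:
            first_chunk_at = time.monotonic()  # start the clock at the first chunk
        text += chunk
    if first_chunk_at is None:
        raise ValueError("stream produced no chunks")
    elapsed = time.monotonic() - first_chunk_at
    return count_tokens(text) / elapsed if elapsed > 0 else float("inf")
```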
{"@context":"https://schema.org","@type":"Dataset","name":"Output Speed","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Output Tokens per Second; Higher is better","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Price
Price of leading AI models based on our independent evaluations
Pricing: Input and Output Prices
Price per token included in the request/message sent to the API, represented as USD per million tokens.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
API Provider Performance
gpt-oss-120B (high)
gpt-oss-20B (high)
GPT-5.2 (xhigh)
Llama 4 Maverick
Gemini 3 Flash
Gemini 3 Pro Preview (high)
Claude Opus 4.5
Claude 4.5 Sonnet
Mistral Large 3
DeepSeek V3.2
Falcon-H1R-7B
Grok 4.1 Fast
Grok 4
Nova 2.0 Pro Preview (medium)
Nova 2.0 Lite (medium)
MiniMax-M2.1
NVIDIA Nemotron 3 Nano
Kimi K2 Thinking
K-EXAONE
MiMo-V2-Flash
KAT-Coder-Pro V1
K2-V2 (high)
Mi:dm K 2.5 Pro
HyperCLOVA X SEED Think (32B)
GLM-4.7
Qwen3 235B A22B 2507
GPT-5.1 (high)
Output Speed vs. Price: gpt-oss-120B (high)
Smaller, emerging providers are offering high output speeds at competitive prices.
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 ratio).
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
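A minimal sketch of the blended price used on this chart, assuming the 3:1 ratio means three parts input to one part output:

```python
def blended_price_per_m(input_price_per_m: float, output_price_per_m: float) -> float:
    """Blended USD-per-million-token price, assuming a 3:1 input:output token mix."""
    return (3 * input_price_per_m + output_price_per_m) / 4

# Example with hypothetical prices: $0.50/M input, $2.00/M output -> $0.875/M blended
print(blended_price_per_m(0.50, 2.00))
```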
Pricing (Input and Output Prices): gpt-oss-120B (high)
The relative importance of input vs. output token prices varies by use case. For example, generation tasks are typically more output-token weighted, while document processing tasks are more input-token weighted.
Price per token included in the request/message sent to the API, represented as USD per million tokens.
Price per token generated by the model (received from the API), represented as USD per million tokens.
Output Speed: gpt-oss-120B (high)
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).