Personalized Model Recommendation
Get personalized recommendations based on your priorities for intelligence, speed, and cost.
Intelligence
Intelligence of leading AI models based on our independent evaluations
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index: Includes GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt evaluations spanning reasoning, knowledge, math & coding; Evaluation results measured independently by Artificial Analysis","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://artificialanalysis.ai/docs/legal/Terms-of-Use.pdf","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,intelligenceIndex,detailsUrl,isLabClaimedValue\nGemini 3.1 Pro Preview,57.18,/models/gemini-3-1-pro-preview,false\nGPT-5.3 Codex (xhigh),53.97,/models/gpt-5-3-codex,false\nClaude Opus 4.6 (max),52.95,/models/claude-opus-4-6-adaptive,false\nClaude Sonnet 4.6 (max),51.72,/models/claude-sonnet-4-6-adaptive,false\nGPT-5.2 (xhigh),51.28,/models/gpt-5-2,false\nGLM-5,49.77,/models/glm-5,false\nClaude Opus 4.5,49.73,/models/claude-opus-4-5-thinking,false\nGPT-5.2 Codex (xhigh),49.03,/models/gpt-5-2-codex,false\nGemini 3 Pro Preview (high),48.39,/models/gemini-3-pro,false\nGPT-5.1 (high),47.7,/models/gpt-5-1,false\nKimi K2.5,46.81,/models/kimi-k2-5,false\nGPT-5.2 (medium),46.64,/models/gpt-5-2-medium,false\nClaude Opus 4.6,46.46,/models/claude-opus-4-6,false\nGemini 3 Flash,46.43,/models/gemini-3-flash-reasoning,false\nQwen3.5 397B A17B,45.05,/models/qwen3-5-397b-a17b,false\nGPT-5 (high),44.63,/models/gpt-5,false\nGPT-5 Codex (high),44.63,/models/gpt-5-codex,false\nClaude Sonnet 4.6,44.38,/models/claude-sonnet-4-6,false\nGPT-5.1 Codex (high),43.11,/models/gpt-5-1-codex,false\nClaude Opus 4.5,43.09,/models/claude-opus-4-5,false\nClaude 4.5 Sonnet,43.03,/models/claude-4-5-sonnet-thinking,false\n\"Claude Sonnet 4.6 (Non-reasoning, Low Effort)\",42.6,/models/claude-sonnet-4-6-non-reasoning-low-effort,false\nGLM-4.7,42.11,/models/glm-4-7,false\nQwen3.5 27B,42.07,/models/qwen3-5-27b,false\nGPT-5 (medium),42.03,/models/gpt-5-medium,false\nMiniMax-M2.5,41.93,/models/minimax-m2-5,false\nDeepSeek V3.2,41.71,/models/deepseek-v3-2-reasoning,false\nQwen3.5 122B A10B,41.6,/models/qwen3-5-122b-a10b,false\nGrok 4,41.52,/models/grok-4,false\nMiMo-V2-Flash (Feb 2026),41.46,/models/mimo-v2-0206,false\nGemini 3 Pro Preview (low),41.3,/models/gemini-3-pro-low,false\nGPT-5 mini (high),41.17,/models/gpt-5-mini,false\nKimi K2 Thinking,40.89,/models/kimi-k2-thinking,false\no3-pro,40.68990197347841,/models/o3-pro,true\nGLM-5,40.57,/models/glm-5-non-reasoning,false\nQwen3.5 397B A17B,40.1,/models/qwen3-5-397b-a17b-non-reasoning,false\nQwen3 Max Thinking,39.85,/models/qwen3-max-thinking,false\nMiniMax-M2.1,39.42,/models/minimax-m2-1,false\nMiMo-V2-Flash,39.24,/models/mimo-v2-flash-reasoning,false\nGPT-5 (low),39.2,/models/gpt-5-low,false\nGPT-5 mini (medium),38.94,/models/gpt-5-mini-medium,false\nClaude 4 Sonnet,38.66,/models/claude-4-sonnet-thinking,false\nGPT-5.1 Codex mini (high),38.63,/models/gpt-5-1-codex-mini,false\nGrok 4.1 Fast,38.61,/models/grok-4-1-fast-reasoning,false\no3,38.37,/models/o3,false\nKimi K2.5,37.27,/models/kimi-k2-5-non-reasoning,false\nClaude 4.5 Sonnet,37.14,/models/claude-4-5-sonnet,false\nQwen3.5 35B A3B,37.12,/models/qwen3-5-35b-a3b,false\nClaude 4.5 Haiku,37.09,/models/claude-4-5-haiku-reasoning,false\nMiniMax-M2,36.09,/models/minimax-m2,false\nKAT-Coder-Pro V1,36.03,/models/kat-coder-pro-v1,false\nNova 2.0 Pro Preview (medium),35.71,/models/nova-2-0-pro-reasoning-medium,false\nGrok 4 Fast,35.06,/models/grok-4-fast-reasoning,false\nGemini 3 Flash,35.05,/models/gemini-3-flash,false\nClaude 3.7 Sonnet,34.71,/models/claude-3-7-sonnet-thinking,false\nGemini 2.5 Pro,34.63,/models/gemini-2-5-pro,false\nGLM-4.7,34.16,/models/glm-4-7-non-reasoning,false\nDeepSeek V3.2 Speciale,34.07946327638502,/models/deepseek-v3-2-speciale,true\nDeepSeek V3.1 Terminus,33.93,/models/deepseek-v3-1-terminus-reasoning,false\nGPT-5.2,33.57,/models/gpt-5-2-non-reasoning,false\nDoubao Seed Code,33.52,/models/doubao-seed-code,false\ngpt-oss-120B (high),33.27,/models/gpt-oss-120b,false\no4-mini (high),33.06,/models/o4-mini,false\nClaude 4 Sonnet,33,/models/claude-4-sonnet,false\nDeepSeek V3.2 Exp,32.94,/models/deepseek-v3-2-reasoning-0925,false\nMercury 2,32.82,/models/mercury-2,false\nGLM-4.6,32.51,/models/glm-4-6-reasoning,false\nQwen3 Max Thinking (Preview),32.48,/models/qwen3-max-thinking-preview,false\nK-EXAONE,32.12,/models/k-exaone,false\nDeepSeek V3.2,32.09,/models/deepseek-v3-2,false\nGrok 3 mini Reasoning (high),32.08,/models/grok-3-mini-reasoning,false\nNova 2.0 Pro Preview (low),31.9,/models/nova-2-0-pro-reasoning-low,false\nClaude 4.1 Opus,31.888270712229925,/models/claude-4-1-opus-thinking,true\nQwen3 Max,31.38,/models/qwen3-max,false\nGemini 2.5 Flash (Sep),31.14,/models/gemini-2-5-flash-preview-09-2025-reasoning,false\nClaude 4.5 Haiku,31.05,/models/claude-4-5-haiku,false\nKimi K2 0905,30.85,/models/kimi-k2-0905,false\nClaude 3.7 Sonnet,30.81,/models/claude-3-7-sonnet,false\no1,30.75,/models/o1,false\nMiMo-V2-Flash,30.35,/models/mimo-v2-flash,false\nGemini 2.5 Pro (Mar),30.295702949046902,/models/gemini-2-5-pro-03-25,true\nGLM-4.6,30.24,/models/glm-4-6,false\nGLM-4.7-Flash,30.15,/models/glm-4-7-flash,false\nNova 2.0 Lite (medium),29.73,/models/nova-2-0-lite-reasoning-medium,false\nGemini 2.5 Pro (May),29.54758250414228,/models/gemini-2-5-pro-05-06,true\nQwen3 235B A22B 2507,29.54,/models/qwen3-235b-a22b-instruct-2507-reasoning,false\nERNIE 5.0 Thinking Preview,29.09,/models/ernie-5-0-thinking-preview,false\nGrok Code Fast 1,28.74,/models/grok-code-fast-1,false\nDeepSeek V3.1 Terminus,28.52,/models/deepseek-v3-1-terminus,false\nDeepSeek V3.2 Exp,28.44,/models/deepseek-v3-2-0925,false\nApriel-v1.5-15B-Thinker,28.33194461137262,/models/apriel-v1-5-15b-thinker,true\nQwen3 Coder Next,28.28,/models/qwen3-coder-next,false\nDeepSeek V3.1,28.13,/models/deepseek-v3-1,false\nNova 2.0 Omni (medium),28.02,/models/nova-2-0-omni-reasoning-medium,false\nDeepSeek V3.1,27.71,/models/deepseek-v3-1-reasoning,false\nQwen3 VL 235B A22B,27.64,/models/qwen3-vl-235b-a22b-reasoning,false\nApriel-v1.6-15B-Thinker,27.58,/models/apriel-v1-6-15b-thinker,false\nGPT-5.1,27.42,/models/gpt-5-1-non-reasoning,false\nClaude 4 Opus,27.361794173399694,/models/claude-4-opus-thinking,true\nMagistral Medium 1.2,27.1,/models/magistral-medium-2509,false\nDeepSeek R1 0528,27.07,/models/deepseek-r1,false\nGemini 2.5 Flash,27.04,/models/gemini-2-5-flash-reasoning,false\nGPT-5 nano (high),26.83,/models/gpt-5-nano,false\nQwen3 Next 80B A3B,26.72,/models/qwen3-next-80b-a3b-reasoning,false\nGLM-4.5,26.42,/models/glm-4.5,false\nKimi K2,26.32,/models/kimi-k2,false\nGPT-4.1,26.28,/models/gpt-4-1,false\nQwen3 Max (Preview),26.08,/models/qwen3-max-preview,false\nGPT-5 nano (medium),25.88,/models/gpt-5-nano-medium,false\no3-mini,25.863248068693565,/models/o3-mini,true\no1-pro,25.760825664214558,/models/o1-pro,true\nGemini 2.5 Flash (Sep),25.7,/models/gemini-2-5-flash-preview-09-2025,false\no3-mini (high),25.21,/models/o3-mini-high,false\nGrok 3,25.17,/models/grok-3,false\nSeed-OSS-36B-Instruct,25.16,/models/seed-oss-36b-instruct,false\nQwen3 235B 2507,24.96,/models/qwen3-235b-a22b-instruct-2507,false\nQwen3 Coder 480B,24.77,/models/qwen3-coder-480b-a35b-instruct,false\nQwen3 VL 32B,24.72,/models/qwen3-vl-32b-reasoning,false\nSonar Reasoning Pro,24.62112108368442,/models/sonar-reasoning-pro,true\nNova 2.0 Lite (low),24.59,/models/nova-2-0-lite-reasoning-low,false\ngpt-oss-20B (high),24.47,/models/gpt-oss-20b,false\ngpt-oss-120B (low),24.47,/models/gpt-oss-120b-low,false\nMiniMax M1 80k,24.43,/models/minimax-m1-80k,false\nGemini 2.5 Flash,24.29283568571511,/models/gemini-2-5-flash-reasoning-04-2025,true\nNVIDIA Nemotron 3 Nano,24.27,/models/nvidia-nemotron-3-nano-30b-a3b-reasoning,false\nK2 Think V2,24.12,/models/k2-think-v2,false\nLlama Nemotron Super 49B v1.5,23.931245612475355,/models/llama-nemotron-super-49b-v1-5-reasoning,true\nGPT-5 (minimal),23.89,/models/gpt-5-minimal,false\no1-preview,23.742890445410186,/models/o1-preview,true\nHyperCLOVA X SEED Think (32B),23.72,/models/hyperclova-x-seed-think-32b,false\nClaude 4.1 Opus,23.565610165346172,/models/claude-4-1-opus,true\nGrok 4.1 Fast,23.56,/models/grok-4-1-fast,false\nGLM-4.6V,23.42,/models/glm-4-6v-reasoning,false\nK-EXAONE,23.41,/models/k-exaone-non-reasoning,false\nNova 2.0 Omni (low),23.22,/models/nova-2-0-omni-reasoning-low,false\nGLM-4.5-Air,23.17,/models/glm-4-5-air,false\nGrok 4 Fast,23.12,/models/grok-4-fast,false\nNova 2.0 Pro Preview,23.06,/models/nova-2-0-pro,false\nMi:dm K 2.5 Pro,23.06,/models/mi-dm-k-2-5-pro-dec28,false\nGPT-4.1 mini,22.9,/models/gpt-4-1-mini,false\nMistral Large 3,22.8,/models/mistral-large-3,false\nRing-1T,22.78,/models/ring-1t,false\nMagistral Small 1.2,22.54605154658181,/models/magistral-small-2509,true\nQwen3 30B A3B 2507,22.41,/models/qwen3-30b-a3b-2507-reasoning,false\nEXAONE 4.0 32B,22.349745923183455,/models/exaone-4-0-32b-reasoning,true\nDeepSeek V3 0324,22.28,/models/deepseek-v3-0324,false\nClaude 4 Opus,22.171205776538883,/models/claude-4-opus,true\nINTELLECT-3,22.17,/models/intellect-3,false\nGLM-4.7-Flash,22.07,/models/glm-4-7-flash-non-reasoning,false\nDevstral 2,22.04,/models/devstral-2,false\nGPT-5 (ChatGPT),21.834580028932926,/models/gpt-5-chatgpt,true\nHermes 4 405B,21.724919221240167,/models/hermes-4-llama-3-1-405b-reasoning,true\nSolar Open 100B,21.67,/models/solar-open-100b-reasoning,false\nGemini 2.5 Flash-Lite (Sep),21.65,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning,false\nGrok 3 Reasoning Beta,21.647717992955027,/models/grok-3-reasoning,true\nQwen3 VL 32B,21.387891687554983,/models/qwen3-vl-32b-instruct,true\nMistral Medium 3.1,21.25,/models/mistral-medium-3-1,false\nNVIDIA Nemotron Nano 12B v2 VL,21.201670015781215,/models/nvidia-nemotron-nano-12b-v2-vl-reasoning,true\nMiniMax M1 40k,20.856172610675117,/models/minimax-m1-40k,true\nK2-V2 (medium),20.839598542514658,/models/k2-v2-medium,true\nQwen3 Omni 30B A3B,20.834076780566665,/models/qwen3-omni-30b-a3b-reasoning,true\ngpt-oss-20B (low),20.79,/models/gpt-oss-20b-low,false\nQwen3 VL 235B A22B,20.75,/models/qwen3-vl-235b-a22b-instruct,false\nGPT-5 mini (minimal),20.68,/models/gpt-5-mini-minimal,false\nK2-V2 (high),20.61,/models/k2-v2,false\nRing-flash-2.0,20.581651045250553,/models/ring-flash-2-0,true\nGemini 2.5 Flash,20.56,/models/gemini-2-5-flash,false\no1-mini,20.38686032302532,/models/o1-mini,true\nHermes 4 70B,20.386235707526104,/models/hermes-4-llama-3-1-70b-reasoning,true\nQwen3 32B,20.128565048930692,/models/qwen3-32b-instruct-reasoning,true\nQwen3 Next 80B A3B,20.11,/models/qwen3-next-80b-a3b-instruct,false\nLlama Nemotron Ultra,20.02216586108181,/models/llama-3-1-nemotron-ultra-253b-v1-reasoning,true\nQwen3 VL 30B A3B,20.00625456480055,/models/qwen3-vl-30b-a3b-instruct,true\nTri-21B-think Preview,19.99,/models/tri-21b-think-preview,false\nQwen3 Coder 30B A3B,19.98,/models/qwen3-coder-30b-a3b-instruct,false\nGPT-4.5 (Preview),19.956828896300806,/models/gpt-4-5,true\nLing-flash-2.0,19.916329097188502,/models/ling-flash-2-0,true\nQwen3 235B,19.79,/models/qwen3-235b-a22b-instruct-reasoning,false\nQwQ-32B,19.724216680037447,/models/qwq-32b,true\nQwen3 VL 30B A3B,19.68,/models/qwen3-vl-30b-a3b-reasoning,false\nSolar Pro 2,19.62295300802516,/models/solar-pro-2-reasoning,true\nGemini 2.0 Flash Thinking exp. (Jan),19.60282838932838,/models/gemini-2-0-flash-thinking-exp-0121,true\nDevstral Small 2,19.47,/models/devstral-small-2,false\nGemini 2.5 Flash-Lite (Sep),19.42,/models/gemini-2-5-flash-lite-preview-09-2025,false\nGLM-4.5V,19.267407790488182,/models/glm-4-5v-reasoning,true\nQwen3 30B A3B 2507,19.262302603890806,/models/qwen3-30b-a3b-2507,true\nQwen3 30B,19.089528772355642,/models/qwen3-30b-a3b-instruct-reasoning,true\nMotif-2-12.7B,19.08,/models/motif-2-12-7b,false\nLing-1T,19.04,/models/ling-1t,false\nNova Premier,19.01,/models/nova-premier,false\nOlmo 3 32B Think,18.88824843779319,/models/olmo-3-32b-think,true\nDeepSeek R1 (Jan),18.84,/models/deepseek-r1-0120,false\nSolar Pro 2,18.80930200641581,/models/solar-pro-2-preview-reasoning,true\nNVIDIA Nemotron Nano 9B V2,18.788405538471565,/models/nvidia-nemotron-nano-9b-v2,true\nMagistral Medium 1,18.77,/models/magistral-medium,false\nMistral Medium 3,18.76,/models/mistral-medium-3,false\nQwen3 14B,18.753585351905503,/models/qwen3-14b-instruct-reasoning,true\nClaude 3.5 Haiku,18.66,/models/claude-3-5-haiku,false\nGPT-4o (Aug),18.64,/models/gpt-4o-2024-08-06,false\nTri-21B-Think,18.62,/models/tri-21b-think-v0-5,false\nNova 2.0 Lite,18.570604974774277,/models/nova-2-0-lite,true\nGPT-4o (Mar),18.558508422577216,/models/gpt-4o-chatgpt-03-25,true\nGemini 2.0 Flash,18.51,/models/gemini-2-0-flash,false\nLlama 3.3 Nemotron Super 49B,18.49174710104611,/models/llama-3-3-nemotron-super-49b-reasoning,true\nLlama 4 Maverick,18.36,/models/llama-4-maverick,false\nQwen3 4B 2507,18.18,/models/qwen3-4b-2507-instruct-reasoning,false\nGemini 2.0 Pro Experimental,18.052648552336592,/models/gemini-2-0-pro-experimental-02-05,true\nDevstral Small (May),18.03,/models/devstral-small-2505,false\nSonar Reasoning,17.87769906391809,/models/sonar-reasoning,true\nGemini 2.5 Flash,17.844790713497186,/models/gemini-2-5-flash-04-2025,true\nNova 2.0 Omni,17.832719657356495,/models/nova-2-0-omni,true\nHermes 4 405B,17.63,/models/hermes-4-llama-3-1-405b,false\nGemini 2.5 Flash-Lite,17.57,/models/gemini-2-5-flash-lite-reasoning,false\nGPT-4o (Nov),17.32,/models/gpt-4o,false\nERNIE 4.5 300B A47B,17.256259598074,/models/ernie-4-5-300b-a47b,true\nDeepSeek R1 Distill Qwen 32B,17.166714190986596,/models/deepseek-r1-distill-qwen-32b,true\nGLM-4.6V,17.1,/models/glm-4-6v,false\nDeepSeek V3 (Dec),17.06978519415099,/models/deepseek-v3,true\nQwen3 235B,16.96,/models/qwen3-235b-a22b-instruct,false\nOlmo 3 7B Think,16.796070376777948,/models/olmo-3-7b-think,true\nMagistral Small 1,16.791620007828033,/models/magistral-small,true\nGemini 2.0 Flash (exp),16.7744977006624,/models/gemini-2-0-flash-experimental,true\nQwen3 VL 8B,16.66,/models/qwen3-vl-8b-reasoning,false\nK2-V2 (low),16.469907911796557,/models/k2-v2-low,true\nDeepSeek R1 0528 Qwen3 8B,16.43067398648048,/models/deepseek-r1-qwen3-8b,true\nQwen2.5 Max,16.282944203161424,/models/qwen-2-5-max,true\nMinistral 3 14B,16.218700898434303,/models/ministral-3-14b,true\nQwen3 4B 2507,16.137244976777335,/models/qwen3-4b-2507-instruct,true\nEXAONE 4.0 32B,16.128694503015343,/models/exaone-4-0-32b,true\nSolar Pro 2,16.073226748517293,/models/solar-pro-2,true\nQwen3 Omni 30B A3B,16.06045361124703,/models/qwen3-omni-30b-a3b-instruct,true\nGemini 2.5 Flash-Lite,16.009462487620045,/models/gemini-2-5-flash-lite,true\nGemini 1.5 Pro (Sep),15.99411151166208,/models/gemini-1-5-pro,true\nSolar Pro 2 ,15.99411151166208,/models/solar-pro-2-preview,true\nDeepSeek R1 Distill Llama 70B,15.950177423585206,/models/deepseek-r1-distill-llama-70b,true\nClaude 3.5 Sonnet (Oct),15.926898851351913,/models/claude-35-sonnet,true\nDeepSeek R1 Distill Qwen 14B,15.844510353679771,/models/deepseek-r1-distill-qwen-14b,true\nQwen3 14B,15.665329414925122,/models/qwen3-14b-instruct,true\nGPT-5 nano (minimal),15.590783726427158,/models/gpt-5-nano-minimal,true\nQwen2.5 72B,15.557766275790943,/models/qwen2-5-72b-instruct,true\nSonar,15.492770267156052,/models/sonar,true\nQwen3 8B,15.279926185040281,/models/qwen3-8b-instruct-reasoning,true\nMinistral 3 8B,15.251608195023016,/models/ministral-3-8b,true\nSonar Pro,15.225966789192796,/models/sonar-pro,true\nLlama 3.1 405B,15.203185114467049,/models/llama-3-1-instruct-405b,true\nQwQ 32B-Preview,15.173958740020906,/models/QwQ-32B-Preview,true\nDevstral Medium,15.102796916165374,/models/devstral-medium,true\nLing-mini-2.0,15.090794964025125,/models/ling-mini-2-0,true\nMistral Small 3.2,15.07,/models/mistral-small-3-2,false\nGPT-4.1 nano,14.89216078821739,/models/gpt-4-1-nano,true\nDevstral Small,14.840945563179064,/models/devstral-small,true\nQwen3 VL 8B,14.809516085602741,/models/qwen3-vl-8b-instruct,true\nNVIDIA Nemotron Nano 9B V2,14.76,/models/nvidia-nemotron-nano-9b-v2-reasoning,false\nCommand A,14.711727324474086,/models/command-a,true\nGemini 2.0 Flash-Lite (Feb),14.702194584063244,/models/gemini-2-0-flash-lite-001,true\nMistral Large 2 (Nov),14.676681663368385,/models/mistral-large-2,true\nExaone 4.0 1.2B,14.637839944967833,/models/exaone-4-0-1-2b-reasoning,true\nLlama Nemotron Super 49B v1.5,14.610711920891266,/models/llama-nemotron-super-49b-v1-5,true\nQwen3 30B,14.56528895780426,/models/qwen3-30b-a3b-instruct,true\nQwen3 32B,14.532286886975053,/models/qwen3-32b-instruct,true\nGPT-4o (May),14.498529042479218,/models/gpt-4o-2024-05-13,true\nLlama 3.3 70B,14.49,/models/llama-3-3-instruct-70b,false\nGemini 2.0 Flash-Lite (Preview),14.487085601010705,/models/gemini-2-0-flash-lite-preview,true\nLlama 3.1 Nemotron Nano 4B v1.1,14.433728654243223,/models/llama-3-1-nemotron-nano-4b-reasoning,true\nKimi Linear 48B A3B Instruct,14.414576519674707,/models/kimi-linear-48b-a3b-instruct,true\nGLM-4.5V,14.376428460400666,/models/glm-4-5v,true\nReka Flash 3,14.349784905144217,/models/reka-flash-3,true\nLlama 3.3 Nemotron Super 49B,14.34668262331532,/models/llama-3-3-nemotron-super-49b,true\nQwen3 4B,14.220680621459485,/models/qwen3-4b-instruct-reasoning,true\nNVIDIA Nemotron 3 Nano,14.217305970920366,/models/nvidia-nemotron-3-nano-30b-a3b,true\nClaude 3.5 Sonnet (June),14.170308711536471,/models/claude-35-sonnet-june-24,true\nNVIDIA Nemotron Nano 12B v2 VL,14.16465748909028,/models/nvidia-nemotron-nano-12b-v2-vl,true\nTulu3 405B,14.140503152934265,/models/tulu3-405b,true\nGPT-4o (ChatGPT),14.107047509866186,/models/gpt-4o-chatgpt,true\nQwen3 VL 4B,14.078586469580603,/models/qwen3-vl-4b-instruct,true\nNova Pro,14.015305085876673,/models/nova-pro,true\nPixtral Large,14.000187381306647,/models/pixtral-large-2411,true\nMistral Small 3.1,13.974509959335386,/models/mistral-small-3-1,true\nGrok 2,13.886018572918001,/models/grok-2-1212,true\nGemini 1.5 Flash (Sep),13.791318335280737,/models/gemini-1-5-flash,true\nQwen3 VL 4B,13.73,/models/qwen3-vl-4b-reasoning,false\nGPT-4 Turbo,13.715301370059112,/models/gpt-4-turbo,true\nHermes 4 70B,13.551468975216904,/models/hermes-4-llama-3-1-70b,true\nLlama 4 Scout,13.52,/models/llama-4-scout,false\nLlama 3.1 Nemotron 70B,13.508443750240778,/models/llama-3-1-nemotron-instruct-70b,true\nGrok Beta,13.281894010187271,/models/grok-beta,true\nQwen3 8B,13.249634311307114,/models/qwen3-8b-instruct,true\nQwen2.5 Instruct 32B,13.236526421623505,/models/qwen2.5-32b-instruct,true\nGranite 4.0 H Small,13.186584050063225,/models/granite-4-0-h-small,true\nPhi-4,13.183090068262993,/models/phi-4,true\nLlama 3.1 70B,13.134271418700061,/models/llama-3-1-instruct-70b,true\nQwen3 1.7B,13.068306567976027,/models/qwen3-1.7b-instruct-reasoning,true\nMistral Large 2 (Jul),13.033082438627458,/models/mistral-large-2407,true\nOlmo 3 7B,12.988897484456922,/models/olmo-3-7b-instruct,true\nQwen2.5 Coder 32B,12.865568210057065,/models/qwen2-5-coder-32b-instruct,true\nMinistral 3 3B,12.858755650671329,/models/ministral-3-3b,true\nGPT-4,12.754307113238253,/models/gpt-4,true\nNova Lite,12.750174437772086,/models/nova-lite,true\nMistral Small 3,12.667127328098278,/models/mistral-small-3,true\nGPT-4o mini,12.647336534486156,/models/gpt-4o-mini,true\nJamba Reasoning 3B,12.565688353543328,/models/jamba-reasoning-3b,true\nJamba 1.7 Large,12.519156020602248,/models/jamba-1-7-large,true\nDeepSeek-V2.5 (Dec),12.511590737265823,/models/deepseek-v2-5,true\nQwen3 4B,12.49184791077126,/models/qwen3-4b-instruct,true\nClaude 3 Opus,12.452455969031023,/models/claude-3-opus,true\nExaone 4.0 1.2B,12.423315047214832,/models/exaone-4-0-1-2b,true\nGemma 3 12B,12.380590362401561,/models/gemma-3-12b,true\nGemini 2.0 Flash Thinking exp. (Dec),12.331778143209375,/models/gemini-2-0-flash-thinking-exp-1219,true\nDeepSeek-V2.5,12.3252882665075,/models/deepseek-v2-5-sep-2024,true\nMistral Saba,12.128983524455348,/models/mistral-saba,true\nDeepSeek R1 Distill Llama 8B,12.10028639307147,/models/deepseek-r1-distill-llama-8b,true\nGemini 1.5 Pro (May),11.995643433356523,/models/gemini-1-5-pro-may-2024,true\nR1 1776,11.989331081091219,/models/r1-1776,true\nQwen2.5 Turbo,11.973563045943266,/models/qwen-turbo,true\nReka Flash,11.967262408951813,/models/reka-flash,true\nLlama 3.2 90B (Vision),11.90129896307905,/models/llama-3-2-instruct-90b-vision,true\nSolar Mini,11.901298628500898,/models/solar-mini,true\nGrok-1,11.690189040378796,/models/grok-1,true\nQwen2 72B,11.662530563545237,/models/qwen2-72b-instruct,true\nNova Micro,11.550167660288439,/models/nova-micro,true\nLFM2 8B A1B,11.446568066689569,/models/lfm2-8b-a1b,true\nLlama 3.1 8B,11.301889825217446,/models/llama-3-1-instruct-8b,true\nGemini 1.5 Flash-8B,11.131677468292837,/models/gemini-1-5-flash-8b,true\nGranite 4.0 Micro,11.09417230526621,/models/granite-4-0-micro,true\nPhi-4 Mini,10.939443779805618,/models/phi-4-mini,true\nDeepHermes 3 - Mistral 24B,10.888270503865678,/models/deephermes-3-mistral-24b-preview,true\nLlama 3.2 11B (Vision),10.887386887368143,/models/llama-3-2-instruct-11b-vision,true\nGemma 3n E4B,10.884502112092596,/models/gemma-3n-e4b,true\nGranite 3.3 8B,10.789732126843495,/models/granite-3-3-8b-instruct,true\nJamba 1.5 Large,10.695130465051701,/models/jamba-1-5-large,true\nJamba 1.7 Mini,10.684455764128451,/models/jamba-1-7-mini,true\nGemma 3 4B,10.653355925612512,/models/gemma-3-4b,true\nHermes 3 - Llama-3.1 70B,10.647383158323558,/models/hermes-3-llama-3-1-70b,true\nDeepSeek-Coder-V2,10.608221945579603,/models/deepseek-coder-v2,true\nQwen3 1.7B,10.580206592118074,/models/qwen3-1.7b-instruct,true\nOLMo 2 32B,10.566197101720523,/models/olmo-2-32b,true\nJamba 1.6 Large,10.555304867718469,/models/jamba-1-6-large,true\nQwen3 0.6B,10.504775390875873,/models/qwen3-0.6b-instruct-reasoning,true\nLFM2 24B A2B,10.49,/models/lfm2-24b-a2b,false\nGemini 1.5 Flash (May),10.46126910425808,/models/gemini-1-5-flash-may-2024,true\nGranite 4.0 H 1B,10.385752893854013,/models/granite-4-0-h-nano-1b,true\nGemma 3 27B,10.31,/models/gemma-3-27b,false\nClaude 3 Sonnet,10.270295671896926,/models/claude-3-sonnet,true\nGranite 4.0 1B,10.262638320202154,/models/granite-4-0-nano-1b,true\nLlama 3 70B,10.192203109851526,/models/llama-3-instruct-70b,true\nMistral Small (Sep),10.178799016274557,/models/mistral-small,true\nGemini 1.0 Ultra,10.146700927796493,/models/gemini-1-0-ultra,true\nPhi-3 Mini,10.10075277153229,/models/phi-3-mini,true\nGemma 3n E4B (May),10.058952665396129,/models/gemma-3n-e4b-preview-0520,true\nPhi-4 Multimodal,10.04043714573622,/models/phi-4-multimodal,true\nQwen2.5 Coder 7B ,9.982467098648165,/models/qwen2-5-coder-7b-instruct,true\nMistral Large (Feb),9.90917085347575,/models/mistral-large,true\nLFM2 2.6B,9.8783940490465,/models/lfm2-2-6b,true\nMixtral 8x22B,9.844182670676119,/models/mistral-8x22b-instruct,true\nLlama 2 Chat 7B,9.738523435238086,/models/llama-2-chat-7b,true\nGemma 3n E2B,9.727723997995453,/models/gemma-3n-e2b,true\nLlama 3.2 3B,9.702177369527044,/models/llama-3-2-instruct-3b,true\nQwen3 0.6B,9.653252960548699,/models/qwen3-0.6b-instruct,true\nQwen1.5 Chat 110B,9.5481702885671,/models/qwen1.5-110b-chat,true\nLFM2 1.2B,9.328833244924207,/models/lfm2-1-2b,true\nClaude 2.1,9.324651438021968,/models/claude-21,true\nClaude 3 Haiku,9.300141268649064,/models/claude-3-haiku,true\nOLMo 2 7B,9.296750708294477,/models/olmo-2-7b,true\nMolmo 7B-D,9.247608272030813,/models/molmo-7b-d,true\nLlama 3.2 1B,9.13072365693697,/models/llama-3-2-instruct-1b,true\nDeepSeek R1 Distill Qwen 1.5B,9.075326286181491,/models/deepseek-r1-distill-qwen-1-5b,true\nClaude 2.0,9.058555199878974,/models/claude-2,true\nDeepSeek-V2,9.058555199878974,/models/deepseek-v2,true\nMistral Small (Feb),9.039501606668157,/models/mistral-small-2402,true\nMistral Medium,9.010996334814534,/models/mistral-medium,true\nGPT-3.5 Turbo,8.989676386007632,/models/gpt-35-turbo,true\nGranite 4.0 H 350M,8.970038363283287,/models/granite-4-0-h-350m,true\nGranite 4.0 350M,8.844934770956986,/models/granite-4-0-350m,true\nArctic,8.823244716278813,/models/arctic-instruct,true\nQwen Chat 72B,8.823244716278813,/models/qwen-chat-72b,true\nLFM 40B,8.760765585157351,/models/lfm-40b,true\nLlama 3 8B,8.701018683736894,/models/llama-3-instruct-8b,true\nGemma 3 1B,8.647929714252196,/models/gemma-3-1b,true\nPALM-2,8.594046799469973,/models/palm-2,true\nGemini 1.0 Pro,8.501805478425053,/models/gemini-1-0-pro,true\nDeepSeek Coder V2 Lite,8.479458137084212,/models/deepseek-coder-v2-lite,true\nGemma 3 270M,8.372813365251153,/models/gemma-3-270m,true\nLlama 2 Chat 70B,8.370802665737399,/models/llama-2-chat-70b,true\nDeepSeek LLM 67B (V1),8.370802665737399,/models/deepseek-llm-67b-chat,true\nLlama 2 Chat 13B,8.35759395005398,/models/llama-2-chat-13b,true\nCommand-R+ (Apr),8.348799720910122,/models/command-r-plus-04-2024,true\nOpenChat 3.5,8.320282434218926,/models/openchat-35,true\nDBRX,8.315903697714282,/models/dbrx,true\nJamba 1.5 Mini,8.027724014032357,/models/jamba-1-5-mini,true\nJamba 1.6 Mini,7.870810849851021,/models/jamba-1-6-mini,true\nMixtral 8x7B,7.731195590246845,/models/mixtral-8x7b-instruct,true\nDeepHermes 3 - Llama-3.1 8B,7.578083680871105,/models/deephermes-3-llama-3-1-8b-preview,true\nLlama 65B,7.413887665229345,/models/llama-65b,true\nQwen Chat 14B,7.413887665229345,/models/qwen-chat-14b,true\nClaude Instant,7.413887665229345,/models/claude-instant,true\nMistral 7B,7.413887665229345,/models/mistral-7b-instruct,true\nCommand-R (Mar),7.413887665229345,/models/command-r-03-2024,true"}
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Open Weights / Proprietary","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index: Includes GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt evaluations spanning reasoning, knowledge, math & coding; Evaluation results measured independently by Artificial Analysis","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://artificialanalysis.ai/docs/legal/Terms-of-Use.pdf","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Intelligence vs. Cost to Run Artificial Analysis Intelligence Index
The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Image & Video Leaderboards
Top models from our Image Arena and Video Arena leaderboards, with 95% confidence intervals
Text to Image Leaderboard
Frontier Language Model Intelligence, Over Time
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Intelligence Evaluations
While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
AA-Omniscience New grader
AA-Omniscience is a knowledge and hallucination benchmark that rewards accuracy, punishes bad guesses and provides a comprehensive view of which models produce factually reliable outputs across different domains
AA-Omniscience Index
AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.
{"@context":"https://schema.org","@type":"Dataset","name":"AA-Omniscience Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://artificialanalysis.ai/docs/legal/Terms-of-Use.pdf","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,omniscienceIndex,detailsUrl,isLabClaimedValue\nGemini 3.1 Pro Preview,32.933,/models/gemini-3-1-pro-preview/providers,false\nGemini 3 Pro Preview (high),15.8,/models/gemini-3-pro/providers,false\nClaude Opus 4.6 (max),13.5,/models/claude-opus-4-6-adaptive/providers,false\nClaude Opus 4.5,13.267,/models/claude-opus-4-5-thinking/providers,false\nClaude Sonnet 4.6 (max),12.367,/models/claude-sonnet-4-6-adaptive/providers,false\nGemini 3 Flash,11.567,/models/gemini-3-flash-reasoning/providers,false\nGPT-5.3 Codex (xhigh),9.883,/models/gpt-5-3-codex/providers,false\nGPT-5.1 (high),5.55,/models/gpt-5-1/providers,false\nGrok 4,3.75,/models/grok-4/providers,false\nClaude Opus 4.6,3.467,/models/claude-opus-4-6/providers,false\nGemini 3 Pro Preview (low),2.15,/models/gemini-3-pro-low/providers,false\nGLM-5,2,/models/glm-5/providers,false\nClaude 4.5 Sonnet,0.333,/models/claude-4-5-sonnet-thinking/providers,false\nClaude 3.7 Sonnet,0.317,/models/claude-3-7-sonnet-thinking/providers,false\nClaude 4 Sonnet,0.183,/models/claude-4-sonnet-thinking/providers,false\nGPT-5.2 (medium),0.017,/models/gpt-5-2-medium/providers,false\nGPT-5.2 (xhigh),-1,/models/gpt-5-2/providers,false\n\"Claude Sonnet 4.6 (Non-reasoning, Low Effort)\",-1.9,/models/claude-sonnet-4-6-non-reasoning-low-effort/providers,false\nGPT-5.2 Codex (xhigh),-2.483,/models/gpt-5-2-codex/providers,false\nClaude Sonnet 4.6,-2.933,/models/claude-sonnet-4-6/providers,false\nGemini 3 Flash,-3.617,/models/gemini-3-flash/providers,false\nClaude Opus 4.5,-3.95,/models/claude-opus-4-5/providers,false\nClaude 4.5 Haiku,-4.217,/models/claude-4-5-haiku-reasoning/providers,false\nGPT-5.1 Codex (high),-6.017,/models/gpt-5-1-codex/providers,false\nGrok 3 mini Reasoning (high),-6.05,/models/grok-3-mini-reasoning/providers,false\nGPT-5 Codex (high),-6.767,/models/gpt-5-codex/providers,false\nClaude 4.5 Haiku,-7.667,/models/claude-4-5-haiku/providers,false\nGPT-5 (high),-8.083,/models/gpt-5/providers,false\nKimi K2.5,-8.117,/models/kimi-k2-5/providers,false\nClaude 4.5 Sonnet,-9.2,/models/claude-4-5-sonnet/providers,false\nClaude 4 Sonnet,-9.233,/models/claude-4-sonnet/providers,false\nClaude 3.7 Sonnet,-9.233,/models/claude-3-7-sonnet/providers,false\nGPT-5 (low),-10.017,/models/gpt-5-low/providers,false\nGPT-5 (medium),-10.083,/models/gpt-5-medium/providers,false\no1,-10.55,/models/o1/providers,false\nGPT-4o (Nov),-10.75,/models/gpt-4o/providers,false\nGPT-5 mini (medium),-10.783,/models/gpt-5-mini-medium/providers,false\nGLM-5,-11.233,/models/glm-5-non-reasoning/providers,false\nGPT-5.2,-13.717,/models/gpt-5-2-non-reasoning/providers,false\nKimi K2.5,-14,/models/kimi-k2-5-non-reasoning/providers,false\nGemini 2.5 Pro,-14.3,/models/gemini-2-5-pro/providers,false\no3,-15.267,/models/o3/providers,false\nGPT-5.1 Codex mini (high),-16.517,/models/gpt-5-1-codex-mini/providers,false\nGPT-5 mini (high),-17.183,/models/gpt-5-mini/providers,false\nMiMo-V2-Flash (Feb 2026),-18.467,/models/mimo-v2-0206/providers,false\nKimi K2 Thinking,-20.483,/models/kimi-k2-thinking/providers,false\nDeepSeek V3.2,-20.883,/models/deepseek-v3-2-reasoning/providers,false\nGPT-4o (Aug),-21.717,/models/gpt-4o-2024-08-06/providers,false\nQwen3 Coder 480B,-22.2,/models/qwen3-coder-480b-a35b-instruct/providers,false\nClaude 3.5 Haiku,-23.183,/models/claude-3-5-haiku/providers,false\nDeepSeek V3.1 Terminus,-24.317,/models/deepseek-v3-1-terminus-reasoning/providers,false\nGPT-5 nano (medium),-24.883,/models/gpt-5-nano-medium/providers,false\nGLM-4.5,-25.45,/models/glm-4.5/providers,false\nMagistral Medium 1.2,-26.233,/models/magistral-medium-2509/providers,false\nMagistral Medium 1,-26.417,/models/magistral-medium/providers,false\nKimi K2 0905,-26.467,/models/kimi-k2-0905/providers,false\nGLM-4.6V,-26.483,/models/glm-4-6v-reasoning/providers,false\nDeepSeek R1 0528,-27.083,/models/deepseek-r1/providers,false\nKimi K2,-27.517,/models/kimi-k2/providers,false\nGPT-5 nano (high),-27.75,/models/gpt-5-nano/providers,false\nGemini 2.5 Flash,-27.85,/models/gemini-2-5-flash-reasoning/providers,false\nDeepSeek V3.1,-28.367,/models/deepseek-v3-1-reasoning/providers,false\nGrok 4 Fast,-28.4,/models/grok-4-fast-reasoning/providers,false\nGrok 4.1 Fast,-28.733,/models/grok-4-1-fast-reasoning/providers,false\nQwen3.5 397B A17B,-29.783,/models/qwen3-5-397b-a17b/providers,false\nDeepSeek V3.2 Exp,-29.933,/models/deepseek-v3-2-reasoning-0925/providers,false\nDeepSeek R1 (Jan),-31.317,/models/deepseek-r1-0120/providers,false\nMistral Medium 3,-31.467,/models/mistral-medium-3/providers,false\nGLM-4.6,-31.6,/models/glm-4-6/providers,false\nGrok 3,-32.333,/models/grok-3/providers,false\nMiniMax-M2.1,-32.833,/models/minimax-m2-1/providers,false\nHermes 4 405B,-33,/models/hermes-4-llama-3-1-405b/providers,false\nK2 Think V2,-33.917,/models/k2-think-v2/providers,false\nGPT-5 (minimal),-33.95,/models/gpt-5-minimal/providers,false\nQwen3 Max Thinking,-34.417,/models/qwen3-max-thinking/providers,false\nDoubao Seed Code,-34.433,/models/doubao-seed-code/providers,false\nGLM-4.7,-34.6,/models/glm-4-7/providers,false\nGemini 2.5 Flash (Sep),-34.717,/models/gemini-2-5-flash-preview-09-2025-reasoning/providers,false\nGPT-5.1,-34.95,/models/gpt-5-1-non-reasoning/providers,false\no4-mini (high),-35.75,/models/o4-mini/providers,false\nGrok Code Fast 1,-36,/models/grok-code-fast-1/providers,false\nQwen3.5 397B A17B,-36.083,/models/qwen3-5-397b-a17b-non-reasoning/providers,false\nGPT-4.1,-36.183,/models/gpt-4-1/providers,false\nNova Premier,-36.267,/models/nova-premier/providers,false\nKAT-Coder-Pro V1,-37.433,/models/kat-coder-pro-v1/providers,false\nDeepSeek V3 0324,-37.567,/models/deepseek-v3-0324/providers,false\nQwen3 Max Thinking (Preview),-37.833,/models/qwen3-max-thinking-preview/providers,false\nGLM-4.6V,-37.867,/models/glm-4-6v/providers,false\nGemini 2.5 Flash (Sep),-38.117,/models/gemini-2-5-flash-preview-09-2025/providers,false\nMistral Large 3,-39.433,/models/mistral-large-3/providers,false\nQwen3.5 122B A10B,-39.583,/models/qwen3-5-122b-a10b/providers,false\nMiniMax-M2.5,-39.7,/models/minimax-m2-5/providers,false\nQwen3 Max (Preview),-40.583,/models/qwen3-max-preview/providers,false\nDeepSeek V3.1,-41.067,/models/deepseek-v3-1/providers,false\no3-mini (high),-41.333,/models/o3-mini-high/providers,false\nGLM-4.6,-41.733,/models/glm-4-6-reasoning/providers,false\nLlama 4 Maverick,-41.767,/models/llama-4-maverick/providers,false\nDeepSeek V3.1 Terminus,-41.883,/models/deepseek-v3-1-terminus/providers,false\nQwen3.5 27B,-42.017,/models/qwen3-5-27b/providers,false\nGemini 2.5 Flash,-42.017,/models/gemini-2-5-flash/providers,false\nQwen3 235B 2507,-42.1,/models/qwen3-235b-a22b-instruct-2507/providers,false\nGemini 2.0 Flash,-42.55,/models/gemini-2-0-flash/providers,false\nGemini 2.5 Flash-Lite (Sep),-42.667,/models/gemini-2-5-flash-lite-preview-09-2025/providers,false\nNVIDIA Nemotron Nano 9B V2,-42.867,/models/nvidia-nemotron-nano-9b-v2-reasoning/providers,false\nQwen3 Max,-43.117,/models/qwen3-max/providers,false\nMiMo-V2-Flash,-43.45,/models/mimo-v2-flash-reasoning/providers,false\nERNIE 5.0 Thinking Preview,-44.133,/models/ernie-5-0-thinking-preview/providers,false\nQwen3 235B,-44.583,/models/qwen3-235b-a22b-instruct-reasoning/providers,false\nQwen3 VL 235B A22B,-44.617,/models/qwen3-vl-235b-a22b-reasoning/providers,false\nGemini 2.5 Flash-Lite,-44.633,/models/gemini-2-5-flash-lite-reasoning/providers,false\nMistral Medium 3.1,-45.467,/models/mistral-medium-3-1/providers,false\nQwen3 235B A22B 2507,-45.55,/models/qwen3-235b-a22b-instruct-2507-reasoning/providers,false\nNova 2.0 Pro Preview (low),-45.767,/models/nova-2-0-pro-reasoning-low/providers,false\nDevstral 2,-46.167,/models/devstral-2/providers,false\nGLM-4.7,-46.283,/models/glm-4-7-non-reasoning/providers,false\nQwen3.5 35B A3B,-46.383,/models/qwen3-5-35b-a3b/providers,false\nDeepSeek V3.2,-46.733,/models/deepseek-v3-2/providers,false\nMiniMax-M2,-46.933,/models/minimax-m2/providers,false\nDeepSeek V3.2 Exp,-47.083,/models/deepseek-v3-2-0925/providers,false\nMiniMax M1 80k,-47.417,/models/minimax-m1-80k/providers,false\nRing-1T,-47.867,/models/ring-1t/providers,false\nNova 2.0 Pro Preview (medium),-48.05,/models/nova-2-0-pro-reasoning-medium/providers,false\nNova 2.0 Pro Preview,-48.15,/models/nova-2-0-pro/providers,false\nMiMo-V2-Flash,-48.483,/models/mimo-v2-flash/providers,false\nGrok 4 Fast,-49.867,/models/grok-4-fast/providers,false\nNova 2.0 Omni (low),-49.983,/models/nova-2-0-omni-reasoning-low/providers,false\ngpt-oss-120B (high),-50.05,/models/gpt-oss-120b/providers,false\nGPT-4.1 mini,-50.133,/models/gpt-4-1-mini/providers,false\ngpt-oss-120B (low),-50.15,/models/gpt-oss-120b-low/providers,false\nQwen3 VL 32B,-50.2,/models/qwen3-vl-32b-reasoning/providers,false\nMistral Small 3.2,-50.417,/models/mistral-small-3-2/providers,false\nQwen3 Next 80B A3B,-50.567,/models/qwen3-next-80b-a3b-reasoning/providers,false\nQwen3 VL 235B A22B,-50.667,/models/qwen3-vl-235b-a22b-instruct/providers,false\nQwen3 Coder 30B A3B,-50.85,/models/qwen3-coder-30b-a3b-instruct/providers,false\nSeed-OSS-36B-Instruct,-50.85,/models/seed-oss-36b-instruct/providers,false\nGrok 4.1 Fast,-50.95,/models/grok-4-1-fast/providers,false\nINTELLECT-3,-51.017,/models/intellect-3/providers,false\nNova 2.0 Lite (low),-51.067,/models/nova-2-0-lite-reasoning-low/providers,false\nNVIDIA Nemotron 3 Nano,-51.633,/models/nvidia-nemotron-3-nano-30b-a3b-reasoning/providers,false\nLlama 3.3 70B,-51.917,/models/llama-3-3-instruct-70b/providers,false\nMercury 2,-52.283,/models/mercury-2/providers,false\nLlama 4 Scout,-52.367,/models/llama-4-scout/providers,false\nQwen3 VL 8B,-52.417,/models/qwen3-vl-8b-reasoning/providers,false\nHyperCLOVA X SEED Think (32B),-52.867,/models/hyperclova-x-seed-think-32b/providers,false\nQwen3 235B,-52.967,/models/qwen3-235b-a22b-instruct/providers,false\nGemini 2.5 Flash-Lite (Sep),-53.083,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning/providers,false\nSolar Open 100B,-54.1,/models/solar-open-100b-reasoning/providers,false\nGPT-5 mini (minimal),-54.117,/models/gpt-5-mini-minimal/providers,false\nTri-21B-think Preview,-55.283,/models/tri-21b-think-preview/providers,false\nLing-1T,-55.483,/models/ling-1t/providers,false\nNova 2.0 Lite (medium),-55.617,/models/nova-2-0-lite-reasoning-medium/providers,false\nK2-V2 (high),-56.7,/models/k2-v2/providers,false\nDevstral Small 2,-56.733,/models/devstral-small-2/providers,false\nQwen3 30B A3B 2507,-56.867,/models/qwen3-30b-a3b-2507-reasoning/providers,false\nDevstral Small (May),-56.983,/models/devstral-small-2505/providers,false\nMi:dm K 2.5 Pro,-57.267,/models/mi-dm-k-2-5-pro-dec28/providers,false\nQwen3 VL 30B A3B,-57.317,/models/qwen3-vl-30b-a3b-reasoning/providers,false\nNova 2.0 Omni (medium),-57.833,/models/nova-2-0-omni-reasoning-medium/providers,false\nK-EXAONE,-57.933,/models/k-exaone/providers,false\nApriel-v1.6-15B-Thinker,-58.533,/models/apriel-v1-6-15b-thinker/providers,false\nLFM2 24B A2B,-59.067,/models/lfm2-24b-a2b/providers,false\nQwen3 Next 80B A3B,-59.1,/models/qwen3-next-80b-a3b-instruct/providers,false\nGLM-4.7-Flash,-59.333,/models/glm-4-7-flash/providers,false\ngpt-oss-20B (low),-60.133,/models/gpt-oss-20b-low/providers,false\nQwen3 Coder Next,-60.717,/models/qwen3-coder-next/providers,false\nQwen3 4B 2507,-60.817,/models/qwen3-4b-2507-instruct-reasoning/providers,false\nK-EXAONE,-61.167,/models/k-exaone-non-reasoning/providers,false\nMotif-2-12.7B,-61.183,/models/motif-2-12-7b/providers,false\nGLM-4.7-Flash,-62.233,/models/glm-4-7-flash-non-reasoning/providers,false\nGLM-4.5-Air,-62.517,/models/glm-4-5-air/providers,false\nTri-21B-Think,-63.3,/models/tri-21b-think-v0-5/providers,false\ngpt-oss-20B (high),-63.917,/models/gpt-oss-20b/providers,false\nGemma 3 27B,-65.883,/models/gemma-3-27b/providers,false\nQwen3 VL 4B,-69.35,/models/qwen3-vl-4b-reasoning/providers,false"}
GDPval-AA
GDPval-AA evaluates AI models on real-world, economically valuable tasks across a wide range of occupations
GDPval-AA Leaderboard
Artificial Analysis Openness Index
Artificial Analysis Openness Index assesses how 'open' models are on the basis of their availability and transparency across different components.
Artificial Analysis Openness Index: Components
Artificial Analysis Openness Index vs. Artificial Analysis Intelligence Index
Output Tokens
Output tokens of leading AI models based on our independent evaluations
Output Tokens Used to Run Artificial Analysis Intelligence Index
The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).
Cost Efficiency
Cost of leading AI models based on our independent evaluations
Cost to Run Artificial Analysis Intelligence Index
The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
Speed & Latency
Comparison of first-party API performance
Output Speed
Tokens per second received while the model is generating tokens (ie. after first chunk has been received from the API for models which support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
{"@context":"https://schema.org","@type":"Dataset","name":"Output Speed","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Output speed measures tokens generated per second after the first token is received. Higher values mean faster model output and higher throughput under comparable conditions.","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://artificialanalysis.ai/docs/legal/Terms-of-Use.pdf","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Price
Price of leading AI models based on our independent evaluations
Pricing: Input and Output Prices
Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Comprehensive benchmarking of GPUs for language model inference
Compare leading Text to Video and Image to Video models
Compare leading Image Generation and Image Editing models
Compare leading Text to Speech models
API Provider Performance
gpt-oss-120B (high)
gpt-oss-20B (high)
GPT-5.2 Codex (xhigh)
GPT-5.2 (xhigh)
GPT-5.3 Codex (xhigh)
Llama 4 Maverick
Gemini 3.1 Pro Preview
Gemini 3 Flash
Claude Opus 4.6 (max)
Claude Opus 4.6
Claude Sonnet 4.6 (max)
Claude 4.5 Haiku
Mistral Large 3DeepSeek V3.2
Grok 4.1 Fast
Grok 4
Nova 2.0 Pro Preview (medium)
MiniMax-M2.5
NVIDIA Nemotron 3 Nano
Kimi K2.5
K-EXAONEMiMo-V2-Flash (Feb 2026)
KAT-Coder-Pro V1
K2 Think V2
GLM-5
Qwen3.5 397B A17B
Output Speed vs. Price: gpt-oss-120B (high)
Smaller, emerging providers are offering high output speed and at competitive prices.
Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Tokens per second received while the model is generating tokens (ie. after first chunk has been received from the API for models which support streaming).
Figures represent median (P50) measurement over the past 72 hours to reflect sustained changes in performance.
Pricing (Input and Output Prices): gpt-oss-120B (high)
The relative importance of input vs. output token prices varies by use case. E.g. Generation tasks are typically more output token weighted while document processing tasks are more input token weighted.
Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Price per token generated by the model (received from the API), represented as USD per million Tokens.
Output Speed: gpt-oss-120B (high)
Tokens per second received while the model is generating tokens (ie. after first chunk has been received from the API for models which support streaming).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).