Comparison and analysis of AI models across key performance metrics including quality, price, output speed, latency, context window and others. Click on any model to see detailed metrics. For more details, including our methodology, see our FAQs.
Model Comparison Summary
| Model Name | Creator | License | Context Window | Further analysis |
|---|---|---|---|---|
|  | OpenAI | Open | 131k |  |
|  | OpenAI | Open | 131k |  |
|  | OpenAI | Open | 131k |  |
|  | OpenAI | Open | 131k |  |
|  | OpenAI | Proprietary | 200k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 200k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 4k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 200k |  |
|  | OpenAI | Proprietary | 1m |  |
|  | OpenAI | Proprietary | 200k |  |
|  | OpenAI | Proprietary | 200k |  |
|  | OpenAI | Proprietary | 1m |  |
|  | OpenAI | Proprietary | 8k |  |
|  | OpenAI | Proprietary | 1m |  |
|  | OpenAI | Proprietary | 200k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 200k |  |
|  | OpenAI | Proprietary | 128k |  |
|  | OpenAI | Proprietary | 400k |  |
|  | OpenAI | Proprietary | 4k |  |
|  | Perplexity | Open | 128k |  |
|  | Perplexity | Proprietary | 200k |  |
|  | Perplexity | Proprietary | 127k |  |
|  | Perplexity | Proprietary | 127k |  |
|  | Perplexity | Proprietary | 127k |  |
|  | Liquid AI | Open | 33k |  |
|  | Liquid AI | Open | 33k |  |
|  | Liquid AI | Open | 33k |  |
|  | Liquid AI | Proprietary | 32k |  |
|  | MiniMax | Open | 205k |  |
|  | MiniMax | Open | 1m |  |
|  | MiniMax | Open | 1m |  |
|  | Kimi | Open | 256k |  |
|  | Kimi | Open | 1m |  |
|  | Kimi | Open | 256k |  |
|  | Kimi | Open | 128k |  |
|  | Reka AI | Open | 128k |  |
|  | Reka AI | Proprietary | 128k |  |
|  | Baidu | Open | 131k |  |
|  | Baidu | Proprietary | 128k |  |
|  | Deep Cogito | Open | 128k |  |
|  | KwaiKAT | Proprietary | 256k |  |
|  | Motif Technologies | Proprietary | 128k |  |
|  | Cohere | Open | 256k |  |
|  | Cohere | Open | 128k |  |
|  | Cohere | Open | 128k |  |
|  | ServiceNow | Open | 128k |  |
|  | ServiceNow | Open | 128k |  |
|  | InclusionAI | Open | 128k |  |
|  | InclusionAI | Open | 131k |  |
|  | InclusionAI | Open | 128k |  |
|  | InclusionAI | Open | 128k |  |
|  | InclusionAI | Open | 128k |  |
|  | ByteDance Seed | Proprietary | 256k |  |
|  | ByteDance Seed | Open | 512k |  |
|  | OpenChat | Open | 8k |  |
|  | Databricks | Open | 33k |  |
|  | Snowflake | Open | 4k |  |
Models compared: OpenAI: GPT 4o Audio, GPT 4o Realtime, GPT 4o Speech Pipeline, GPT Realtime, GPT Realtime Mini (Oct '25), GPT-3.5 Turbo, GPT-3.5 Turbo (0125), GPT-3.5 Turbo (0301), GPT-3.5 Turbo (0613), GPT-3.5 Turbo (1106), GPT-3.5 Turbo Instruct, GPT-4, GPT-4 Turbo, GPT-4 Turbo (0125), GPT-4 Turbo (1106), GPT-4 Vision, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano, GPT-4.5 (Preview), GPT-4o (Apr), GPT-4o (Aug), GPT-4o (ChatGPT), GPT-4o (Mar), GPT-4o (May), GPT-4o (Nov), GPT-4o Realtime (Dec), GPT-4o mini, GPT-4o mini Realtime (Dec), GPT-5 (ChatGPT), GPT-5 (high), GPT-5 (low), GPT-5 (medium), GPT-5 (minimal), GPT-5 Codex (high), GPT-5 Pro (high), GPT-5 mini (high), GPT-5 mini (medium), GPT-5 mini (minimal), GPT-5 nano (high), GPT-5 nano (medium), GPT-5 nano (minimal), GPT-5.1, GPT-5.1 (high), GPT-5.1 Codex (high), GPT-5.1 Codex mini (high), GPT-5.2, GPT-5.2 (xhigh), gpt-oss-120B (high), gpt-oss-120B (low), gpt-oss-20B (high), gpt-oss-20B (low), o1, o1-mini, o1-preview, o1-pro, o3, o3-mini, o3-mini (high), o3-pro, and o4-mini (high), Meta: Code Llama 70B, Llama 2 Chat 13B, Llama 2 Chat 70B, Llama 2 Chat 7B, Llama 3 70B, Llama 3 8B, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, Llama 3.2 11B (Vision), Llama 3.2 1B, Llama 3.2 3B, Llama 3.2 90B (Vision), Llama 3.3 70B, Llama 4 Behemoth, Llama 4 Maverick, Llama 4 Scout, and Llama 65B, Google: Gemini 1.0 Pro, Gemini 1.0 Ultra, Gemini 1.5 Flash (May), Gemini 1.5 Flash (Sep), Gemini 1.5 Flash-8B, Gemini 1.5 Pro (May), Gemini 1.5 Pro (Sep), Gemini 2.0 Flash, Gemini 2.0 Flash (exp), Gemini 2.0 Flash Thinking exp. (Dec), Gemini 2.0 Flash Thinking exp. (Jan), Gemini 2.0 Flash-Lite (Feb), Gemini 2.0 Flash-Lite (Preview), Gemini 2.0 Pro Experimental, Gemini 2.5 Flash, Gemini 2.5 Flash Live Preview, Gemini 2.5 Flash Native Audio, Gemini 2.5 Flash Native Audio Dialog, Gemini 2.5 Flash (Sep), Gemini 2.5 Flash-Lite, Gemini 2.5 Flash-Lite (Sep), Gemini 2.5 Pro, Gemini 2.5 Pro (Mar), Gemini 2.5 Pro (May), Gemini 3 Pro Preview (high), Gemini 3 Pro Preview (low), Gemini Experimental (Nov), Gemma 2 27B, Gemma 2 2B, Gemma 2 9B, Gemma 3 12B, Gemma 3 1B, Gemma 3 270M, Gemma 3 27B, Gemma 3 4B, Gemma 3n E2B, Gemma 3n E4B, Gemma 3n E4B (May), Gemma 7B, PALM-2, Whisperwind, fiercefalcon (Non-reasoning), and fiercefalcon (Reasoning), Anthropic: Claude 2.0, Claude 2.1, Claude 3 Haiku, Claude 3 Opus, Claude 3 Sonnet, Claude 3.5 Haiku, Claude 3.5 Sonnet (June), Claude 3.5 Sonnet (Oct), Claude 3.7 Sonnet, Claude 4 Opus, Claude 4 Sonnet, Claude 4.1 Opus, Claude 4.5 Haiku, Claude 4.5 Sonnet, Claude Instant, and Claude Opus 4.5, Mistral: Codestral (Jan), Codestral (May), Codestral-Mamba, Devstral 2, Devstral Medium, Devstral Small, Devstral Small (May), Devstral Small 2, Magistral Medium 1, Magistral Medium 1.1, Magistral Medium 1.2, Magistral Small 1, Magistral Small 1.1, Magistral Small 1.2, Ministral 14B (Dec '25), Ministral 3B, Ministral 3B (Dec '25), Ministral 8B, Ministral 8B (Dec '25), Mistral 7B, Mistral Large (Feb), Mistral Large 2 (Jul), Mistral Large 2 (Nov), Mistral Large 3, Mistral Medium, Mistral Medium 3, Mistral Medium 3.1, Mistral NeMo, Mistral Saba, Mistral Small (Feb), Mistral Small (Sep), Mistral Small 3, Mistral Small 3.1, Mistral Small 3.2, Mixtral 8x22B, Mixtral 8x7B, Pixtral 12B, and Pixtral Large, DeepSeek: DeepSeek Coder V2 Lite, DeepSeek LLM 67B (V1), DeepSeek Prover V2 671B, DeepSeek R1 (FP4), DeepSeek R1 (Jan), DeepSeek R1 0528, DeepSeek R1 0528 Qwen3 8B, DeepSeek R1 Distill Llama 70B, DeepSeek R1 Distill Llama 8B, DeepSeek R1 Distill Qwen 1.5B, DeepSeek R1 
Distill Qwen 14B, DeepSeek R1 Distill Qwen 32B, DeepSeek V3 (Dec), DeepSeek V3 0324, DeepSeek V3.1, DeepSeek V3.1 Terminus, DeepSeek V3.2, DeepSeek V3.2 Exp, DeepSeek V3.2 Speciale, DeepSeek-Coder-V2, DeepSeek-OCR, DeepSeek-V2, DeepSeek-V2.5, DeepSeek-V2.5 (Dec), DeepSeek-VL2, and Janus Pro 7B, Perplexity: PPLX-70B Online, PPLX-7B-Online, R1 1776, Sonar, Sonar 3.1 Huge, Sonar 3.1 Large, Sonar 3.1 Small , Sonar Large, Sonar Pro, Sonar Reasoning, Sonar Reasoning Pro, and Sonar Small, xAI: Grok 2, Grok 3, Grok 3 Reasoning Beta, Grok 3 mini, Grok 3 mini Reasoning (low), Grok 3 mini Reasoning (high), Grok 4, Grok 4 Fast, Grok 4 Fast 1111 (Reasoning), Grok 4 mini (0908), Grok 4.1 Fast, Grok 4.1 Fast v4, Grok Beta, Grok Code Fast 1, Grok Voice, Grok-1, and test model, OpenChat: OpenChat 3.5, Amazon: Nova 2.0 Lite, Nova 2.0 Lite (high), Nova 2.0 Lite (low), Nova 2.0 Lite (medium), Nova 2.0 Omni, Nova 2.0 Omni (high), Nova 2.0 Omni (low), Nova 2.0 Omni (medium), Nova 2.0 Pro Preview, Nova 2.0 Pro Preview (high), Nova 2.0 Pro Preview (low), Nova 2.0 Pro Preview (medium), Nova 2.0 Realtime, Nova 2.0 Sonic, Nova Lite, Nova Micro, Nova Premier, and Nova Pro, Microsoft Azure: Phi-3 Medium 14B, Phi-3 Mini, Phi-4, Phi-4 Mini, Phi-4 Multimodal, Phi-4 mini reasoning, Phi-4 reasoning, Phi-4 reasoning plus, Yosemite-1-1, Yosemite-1-1-d36, Yosemite 1.1 d36 Updated, Yosemite-1-1-d64, Yosemite 1.1 d64 Updated, and Yosemite, Liquid AI: LFM 1.3B, LFM 3B, LFM 40B, LFM2 1.2B, LFM2 2.6B, and LFM2 8B A1B, Upstage: Solar Mini, Solar Pro, Solar Pro (Nov), Solar Pro 2, and Solar Pro 2 , Databricks: DBRX, MiniMax: MiniMax M1 40k, MiniMax M1 80k, MiniMax-M2, and MiniMax-Text-01, NVIDIA: Cosmos Nemotron 34B, Llama 3.1 Nemotron 70B, Llama 3.1 Nemotron Nano 4B v1.1, Llama 3.1 Nemotron Nano 8B, Llama 3.3 Nemotron Nano 8B, Llama Nemotron Ultra, Llama 3.3 Nemotron Super 49B, Llama Nemotron Super 49B v1.5, Nemotron 3 Nano (30B A3B), NVIDIA Nemotron 3 Nano, NVIDIA Nemotron Nano 12B v2 VL, and NVIDIA Nemotron Nano 9B V2, StepFun: Step-2, Step-2-Mini, Step3, step-1-128k, step-1-256k, step-1-32k, step-1-8k, step-1-flash, step-2-16k-exp, and step-r1-v-mini, IBM: Granite 3.0 2B, Granite 3.0 8B, Granite 3.3 8B, Granite 4.0 1B, Granite 4.0 350M, Granite 4.0 8B, Granite 4.0 H 1B, Granite 4.0 H 350M, Granite 4.0 H Small, Granite 4.0 Micro, Granite 4.0 Tiny, and Granite Vision 3.3 2B, Inceptionlabs: Mercury, Mercury Coder Mini, Mercury Coder Small, and Mercury Instruct, Reka AI: Reka Core, Reka Edge, Reka Flash (Feb), Reka Flash, Reka Flash 3, and Reka Flash 3.1, LG AI Research: EXAONE 4.0 32B, EXAONE Deep 32B, and Exaone 4.0 1.2B, Xiaomi: MiMo 7B RL and Mimo-v2-flash-1207-sft, Baidu: ERNIE 4.5, ERNIE 4.5 0.3B, ERNIE 4.5 21B A3B, ERNIE 4.5 300B A47B, ERNIE 4.5 VL 28B A3B, ERNIE 4.5 VL 424B A47B, ERNIE 5.0 Thinking Preview, and ERNIE X1, Baichuan: Baichuan 4 and Baichuan M1 (Preview), vercel: v0-1.0-md, Apple: Apple On-Device and FastVLM, Other: LLaVA-v1.5-7B, Tencent: Hunyuan A13B, Hunyuan 80B A13B, Hunyuan T1, and Hunyuan-TurboS, Prime Intellect: INTELLECT-3, Motif Technologies: Motif-2-12.7B, Korea Telecom: midm-250-pro-rsnsft, Z AI: GLM-4 32B, GLM-4 9B, GLM-4-Air, GLM-4 AirX, GLM-4 FlashX, GLM-4-Long, GLM-4-Plus, GLM-4.1V 9B Thinking, GLM-4.5, GLM-4.5-Air, GLM-4.5V, GLM-4.6, GLM-4.6V, GLM-Z1 32B, GLM-Z1 9B, GLM-Z1 Rumination 32B, and GLM-Zero (Preview), Cohere: Aya Expanse 32B, Aya Expanse 8B, Command, Command A, Command Light, Command R7B, Command-R, Command-R (Mar), Command-R+ (Apr), and Command-R+, Bytedance: Duobao 1.5 Pro, 
Seed-Thinking-v1.5, Skylark Lite, and Skylark Pro, AI21 Labs: Jamba 1.5 Large, Jamba 1.5 Large (Feb), Jamba 1.5 Mini, Jamba 1.5 Mini (Feb), Jamba 1.6 Large, Jamba 1.6 Mini, Jamba 1.7 Large, Jamba 1.7 Mini, Jamba Instruct, and Jamba Reasoning 3B, Snowflake: Arctic and Snowflake Llama 3.3 70B, PaddlePaddle: PaddleOCR-VL-0.9B, Alibaba: QwQ-32B, QwQ 32B-Preview, Qwen Chat 14B, Qwen Chat 72B, Qwen Chat 7B, Qwen1.5 Chat 110B, Qwen1.5 Chat 14B, Qwen1.5 Chat 32B, Qwen1.5 Chat 72B, Qwen1.5 Chat 7B, Qwen2 72B, Qwen2 Instruct 7B, Qwen2 Instruct A14B 57B, Qwen2-VL 72B, Qwen2.5 Coder 32B, Qwen2.5 Coder 7B , Qwen2.5 Instruct 14B, Qwen2.5 Instruct 32B, Qwen2.5 72B, Qwen2.5 Instruct 7B, Qwen2.5 Max, Qwen2.5 Max 01-29, Qwen2.5 Omni 7B, Qwen2.5 Plus, Qwen2.5 Turbo, Qwen2.5 VL 72B, Qwen2.5 VL 7B, Qwen3 0.6B, Qwen3 1.7B, Qwen3 14B, Qwen3 235B, Qwen3 235B A22B 2507, Qwen3 235B 2507, Qwen3 30B, Qwen3 30B A3B 2507, Qwen3 32B, Qwen3 4B, Qwen3 4B 2507, Qwen3 8B, Qwen3 Coder 30B A3B, Qwen3 Coder 480B, Qwen3 Max, Qwen3 Max (Preview), Qwen3 Max Thinking, Qwen3 Next 80B A3B, Qwen3 Omni 30B A3B, Qwen3 VL 235B A22B, Qwen3 VL 30B A3B, Qwen3 VL 32B, Qwen3 VL 4B, and Qwen3 VL 8B, InclusionAI: Ling-1T, Ling-flash-2.0, Ling-mini-2.0, Ring-1T, and Ring-flash-2.0, 01.AI: Yi-Large and Yi-Lightning, and ByteDance Seed: Doubao Seed Code and Seed-OSS-36B-Instruct.
Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
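As a rough illustration of how a combination metric of this kind can be computed, here is a minimal sketch that averages the ten evaluation scores; equal weighting is an assumption here, and the published methodology defines the actual weighting and normalization.

```python
# Minimal sketch of a combination metric: the mean of the ten evaluations
# listed above, each assumed to be reported on a 0-100 scale. Equal
# weighting is an assumption, not the confirmed methodology.

EVALS = [
    "MMLU-Pro", "GPQA Diamond", "Humanity's Last Exam", "LiveCodeBench",
    "SciCode", "AIME 2025", "IFBench", "AA-LCR",
    "Terminal-Bench Hard", "Tau2-Bench Telecom",
]

def intelligence_index(scores: dict[str, float]) -> float:
    """Equal-weighted average of the component evaluation scores."""
    return sum(scores[name] for name in EVALS) / len(EVALS)
```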
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v3.0 incorporates 10 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Represents the average of coding evaluations in the Artificial Analysis Intelligence Index. Currently includes: LiveCodeBench, SciCode, Terminal-Bench Hard. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Coding Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Represents the average of coding benchmarks in the Artificial Analysis Intelligence Index (LiveCodeBench, SciCode, Terminal-Bench Hard)","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Represents the average of agentic capabilities benchmarks in the Artificial Analysis Intelligence Index (Terminal-Bench Hard, 𝜏²-Bench Telecom).
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Agentic Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Represents the average of agentic capabilities benchmarks in the Artificial Analysis Intelligence Index (Terminal-Bench Hard, 𝜏²-Bench Telecom)","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Open Weights vs Proprietary","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v3.0 incorporates 10 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Model Type","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v3.0 incorporates 10 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.
AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.
{"@context":"https://schema.org","@type":"Dataset","name":"AA-Omniscience Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,omniscienceIndex,detailsUrl,isLabClaimedValue\nGemini 3 Pro Preview (high),12.867,/models/gemini-3-pro/providers,false\nClaude Opus 4.5,10.233,/models/claude-opus-4-5-thinking/providers,false\nClaude 4.1 Opus,4.933,/models/claude-4-1-opus-thinking/providers,false\nGPT-5.1 (high),2.2,/models/gpt-5-1/providers,false\nGrok 4,0.95,/models/grok-4/providers,false\nGemini 3 Pro Preview (low),-1.05,/models/gemini-3-pro-low/providers,false\nClaude 4.5 Sonnet,-2.083,/models/claude-4-5-sonnet-thinking/providers,false\nGPT-5.2 (xhigh),-4.317,/models/gpt-5-2/providers,false\nClaude 4.5 Haiku,-5.667,/models/claude-4-5-haiku-reasoning/providers,false\nClaude Opus 4.5,-6.45,/models/claude-opus-4-5/providers,false\nGrok 3 mini Reasoning (high),-6.9,/models/grok-3-mini-reasoning/providers,false\nClaude 4.5 Haiku,-7.95,/models/claude-4-5-haiku/providers,false\nClaude 4.5 Sonnet,-10.65,/models/claude-4-5-sonnet/providers,false\nGPT-5 (high),-11.1,/models/gpt-5/providers,false\nGPT-5 mini (medium),-12.933,/models/gpt-5-mini-medium/providers,false\nGPT-5 (low),-12.933,/models/gpt-5-low/providers,false\nGPT-5 (medium),-13.733,/models/gpt-5-medium/providers,false\no3,-17.183,/models/o3/providers,false\nGemini 2.5 Pro,-17.95,/models/gemini-2-5-pro/providers,false\nLlama 3.1 405B,-18.167,/models/llama-3-1-instruct-405b/providers,false\nGPT-5.1 Codex mini (high),-18.283,/models/gpt-5-1-codex-mini/providers,false\nDeepSeek V3.2 Speciale,-19.233,/models/deepseek-v3-2-speciale/providers,false\nGPT-5 mini (high),-19.617,/models/gpt-5-mini/providers,false\nDeepSeek V3.2,-23.317,/models/deepseek-v3-2-reasoning/providers,false\nKimi K2 Thinking,-23.417,/models/kimi-k2-thinking/providers,false\nDeepSeek V3.1 Terminus,-26.7,/models/deepseek-v3-1-terminus-reasoning/providers,false\nGPT-5 nano (medium),-27.35,/models/gpt-5-nano-medium/providers,false\nMagistral Medium 1.2,-27.633,/models/magistral-medium-2509/providers,false\nKimi K2 0905,-28.35,/models/kimi-k2-0905/providers,false\nGPT-5 nano (high),-29.65,/models/gpt-5-nano/providers,false\nDeepSeek R1 0528,-29.667,/models/deepseek-r1/providers,false\nGrok 4 Fast,-30.5,/models/grok-4-fast-reasoning/providers,false\nGrok 4.1 Fast,-31.383,/models/grok-4-1-fast-reasoning/providers,false\nDeepSeek V3.2 Exp,-31.9,/models/deepseek-v3-2-reasoning-0925/providers,false\nDevstral Medium,-32.8,/models/devstral-medium/providers,false\nGLM-4.6,-33.25,/models/glm-4-6/providers,false\nHermes 4 405B,-35.067,/models/hermes-4-llama-3-1-405b/providers,false\nKAT-Coder-Pro V1,-35.533,/models/kat-coder-pro-v1/providers,false\nDoubao Seed 
Code,-35.933,/models/doubao-seed-code/providers,false\nGPT-5.1,-36.583,/models/gpt-5-1-non-reasoning/providers,false\nGPT-5 (minimal),-36.667,/models/gpt-5-minimal/providers,false\nERNIE 4.5 300B A47B,-36.833,/models/ernie-4-5-300b-a47b/providers,false\nHermes 4 405B,-37.367,/models/hermes-4-llama-3-1-405b-reasoning/providers,false\nGemini 2.5 Flash (Sep),-37.5,/models/gemini-2-5-flash-preview-09-2025-reasoning/providers,false\nGrok Code Fast 1,-38.033,/models/grok-code-fast-1/providers,false\nNova Premier,-38.317,/models/nova-premier/providers,false\nQwen3 Max Thinking,-39.783,/models/qwen3-max-thinking/providers,false\nMistral Large 3,-40.983,/models/mistral-large-3/providers,false\nGemini 2.5 Flash (Sep),-41.317,/models/gemini-2-5-flash-preview-09-2025/providers,false\nGPT-4.1,-42.133,/models/gpt-4-1/providers,false\nERNIE 5.0 Thinking Preview,-42.367,/models/ernie-5-0-thinking-preview/providers,false\nNVIDIA Nemotron Nano 9B V2,-43.217,/models/nvidia-nemotron-nano-9b-v2-reasoning/providers,false\nLlama 4 Maverick,-43.467,/models/llama-4-maverick/providers,false\nGemini 2.5 Flash-Lite (Sep),-43.717,/models/gemini-2-5-flash-lite-preview-09-2025/providers,false\nGLM-4.6,-43.883,/models/glm-4-6-reasoning/providers,false\nDeepSeek V3.1 Terminus,-44.583,/models/deepseek-v3-1-terminus/providers,false\nQwen3 Max,-44.9,/models/qwen3-max/providers,false\nQwen3 235B 2507,-45.383,/models/qwen3-235b-a22b-instruct-2507/providers,false\nLlama Nemotron Ultra,-46.2,/models/llama-3-1-nemotron-ultra-253b-v1-reasoning/providers,false\nGLM-4.5V,-46.417,/models/glm-4-5v-reasoning/providers,false\nQwen3 VL 235B A22B,-46.567,/models/qwen3-vl-235b-a22b-reasoning/providers,false\nDeepSeek R1 Distill Llama 70B,-47.433,/models/deepseek-r1-distill-llama-70b/providers,false\nLlama Nemotron Super 49B v1.5,-47.467,/models/llama-nemotron-super-49b-v1-5-reasoning/providers,false\nNova 2.0 Pro Preview (low),-47.5,/models/nova-2-0-pro-reasoning-low/providers,false\nQwen3 235B A22B 2507,-47.7,/models/qwen3-235b-a22b-instruct-2507-reasoning/providers,false\nMistral Medium 3.1,-47.9,/models/mistral-medium-3-1/providers,false\nDevstral 2,-47.917,/models/devstral-2/providers,false\nDeepSeek V3.2,-48.683,/models/deepseek-v3-2/providers,false\nMiniMax-M2,-49.533,/models/minimax-m2/providers,false\nNova 2.0 Pro Preview (medium),-50.3,/models/nova-2-0-pro-reasoning-medium/providers,false\nNova 2.0 Pro Preview,-50.367,/models/nova-2-0-pro/providers,false\nCommand A,-50.4,/models/command-a/providers,false\nHermes 4 70B,-50.717,/models/hermes-4-llama-3-1-70b-reasoning/providers,false\nMistral Small 3.2,-51.3,/models/mistral-small-3-2/providers,false\nNova 2.0 Omni (low),-51.4,/models/nova-2-0-omni-reasoning-low/providers,false\ngpt-oss-120B (high),-51.933,/models/gpt-oss-120b/providers,false\nDevstral Small,-51.967,/models/devstral-small/providers,false\nGrok 4.1 Fast,-52.317,/models/grok-4-1-fast/providers,false\nQwen3 Next 80B A3B,-52.783,/models/qwen3-next-80b-a3b-reasoning/providers,false\nLlama 4 Scout,-53.05,/models/llama-4-scout/providers,false\nQwen3 VL 32B,-53.233,/models/qwen3-vl-32b-reasoning/providers,false\nSeed-OSS-36B-Instruct,-53.533,/models/seed-oss-36b-instruct/providers,false\nQwen3 VL 8B,-53.8,/models/qwen3-vl-8b-instruct/providers,false\nQwen3 VL 235B A22B,-53.867,/models/qwen3-vl-235b-a22b-instruct/providers,false\nQwen3 VL 8B,-54.317,/models/qwen3-vl-8b-reasoning/providers,false\nGemini 2.5 Flash-Lite (Sep),-54.633,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning/providers,false\nNova 2.0 Lite 
(low),-54.95,/models/nova-2-0-lite-reasoning-low/providers,false\nApriel-v1.5-15B-Thinker,-55.85,/models/apriel-v1-5-15b-thinker/providers,false\ngpt-oss-120B (low),-55.933,/models/gpt-oss-120b-low/providers,false\nQwen3 30B A3B 2507,-57.433,/models/qwen3-30b-a3b-2507-reasoning/providers,false\nSolar Pro 2,-57.533,/models/solar-pro-2-reasoning/providers,false\nNova 2.0 Lite (medium),-57.633,/models/nova-2-0-lite-reasoning-medium/providers,false\nDevstral Small 2,-58.883,/models/devstral-small-2/providers,false\nQwen3 VL 30B A3B,-59.133,/models/qwen3-vl-30b-a3b-reasoning/providers,false\nNova 2.0 Omni (medium),-59.7,/models/nova-2-0-omni-reasoning-medium/providers,false\nApriel-v1.6-15B-Thinker,-59.833,/models/apriel-v1-6-15b-thinker/providers,false\nNova 2.0 Lite,-60.483,/models/nova-2-0-lite/providers,false\nQwen3 Next 80B A3B,-60.483,/models/qwen3-next-80b-a3b-instruct/providers,false\ngpt-oss-20B (low),-60.6,/models/gpt-oss-20b-low/providers,false\nEXAONE 4.0 32B,-61.417,/models/exaone-4-0-32b-reasoning/providers,false\nGranite 4.0 H Small,-62.067,/models/granite-4-0-h-small/providers,false\nMotif-2-12.7B,-62.233,/models/motif-2-12-7b/providers,false\nQwen3 Omni 30B A3B,-62.417,/models/qwen3-omni-30b-a3b-reasoning/providers,false\nGLM-4.5-Air,-63.15,/models/glm-4-5-air/providers,false\nQwen3 VL 32B,-63.9,/models/qwen3-vl-32b-instruct/providers,false\nMinistral 3B (Dec '25),-63.967,/models/ministral-3b/providers,false\nQwen3 VL 30B A3B,-64.033,/models/qwen3-vl-30b-a3b-instruct/providers,false\nDeepSeek R1 0528 Qwen3 8B,-64.617,/models/deepseek-r1-qwen3-8b/providers,false\ngpt-oss-20B (high),-64.9,/models/gpt-oss-20b/providers,false\nQwen3 8B,-66.117,/models/qwen3-8b-instruct-reasoning/providers,false\nEXAONE 4.0 32B,-66.133,/models/exaone-4-0-32b/providers,false\nNVIDIA Nemotron Nano 12B v2 VL,-66.35,/models/nvidia-nemotron-nano-12b-v2-vl-reasoning/providers,false\nGPT-5 nano (minimal),-66.367,/models/gpt-5-nano-minimal/providers,false\nMagistral Small 1.2,-66.383,/models/magistral-small-2509/providers,false\nQwen3 30B A3B 2507,-66.8,/models/qwen3-30b-a3b-2507/providers,false\nMinistral 14B (Dec '25),-67.383,/models/ministral-14b/providers,false\nLing-flash-2.0,-67.45,/models/ling-flash-2-0/providers,false\nMinistral 8B (Dec '25),-69.983,/models/ministral-8b/providers,false\nOLMo 3 7B Think,-73.967,/models/olmo-3-7b-think/providers,false\nQwen3 8B,-75.4,/models/qwen3-8b-instruct/providers,false"}
AA-Omniscience Accuracy (higher is better) measures the proportion of correctly answered questions out of all questions, regardless of whether the model chooses to answer.
{"@context":"https://schema.org","@type":"Dataset","name":"AA-Omniscience Accuracy","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"AA-Omniscience Accuracy (higher is better) measures the proportion of correctly answered questions out of all questions, regardless of whether the model chooses to answer","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,omniscienceAccuracy,detailsUrl,isLabClaimedValue\nGemini 3 Pro Preview (high),0.5365,/models/gemini-3-pro/providers,false\nGemini 3 Pro Preview (low),0.46016666666666667,/models/gemini-3-pro-low/providers,false\nClaude Opus 4.5,0.43116666666666664,/models/claude-opus-4-5-thinking/providers,false\nGPT-5.2 (xhigh),0.4135,/models/gpt-5-2/providers,false\nGrok 4,0.39566666666666667,/models/grok-4/providers,false\nClaude Opus 4.5,0.389,/models/claude-opus-4-5/providers,false\nGPT-5 (high),0.38616666666666666,/models/gpt-5/providers,false\nGemini 2.5 Pro,0.37483333333333335,/models/gemini-2-5-pro/providers,false\nGPT-5 (medium),0.37383333333333335,/models/gpt-5-medium/providers,false\no3,0.3725,/models/o3/providers,false\nDeepSeek V3.2 Speciale,0.3675,/models/deepseek-v3-2-speciale/providers,false\nGPT-5 (low),0.3635,/models/gpt-5-low/providers,false\nClaude 4.1 Opus,0.35933333333333334,/models/claude-4-1-opus-thinking/providers,false\nGPT-5.1 (high),0.353,/models/gpt-5-1/providers,false\nDeepSeek V3.2,0.32216666666666666,/models/deepseek-v3-2-reasoning/providers,false\nClaude 4.5 Sonnet,0.309,/models/claude-4-5-sonnet-thinking/providers,false\nDeepSeek R1 0528,0.29283333333333333,/models/deepseek-r1/providers,false\nKimi K2 Thinking,0.29233333333333333,/models/kimi-k2-thinking/providers,false\nHermes 4 405B,0.2911666666666667,/models/hermes-4-llama-3-1-405b-reasoning/providers,false\nGPT-5.1,0.278,/models/gpt-5-1-non-reasoning/providers,false\nGPT-5 (minimal),0.27216666666666667,/models/gpt-5-minimal/providers,false\nDeepSeek V3.1 Terminus,0.27166666666666667,/models/deepseek-v3-1-terminus-reasoning/providers,false\nGemini 2.5 Flash (Sep),0.2698333333333333,/models/gemini-2-5-flash-preview-09-2025-reasoning/providers,false\nDeepSeek V3.2 Exp,0.26966666666666667,/models/deepseek-v3-2-reasoning-0925/providers,false\nClaude 4.5 Sonnet,0.2693333333333333,/models/claude-4-5-sonnet/providers,false\nQwen3 Max Thinking,0.26616666666666666,/models/qwen3-max-thinking/providers,false\nGPT-4.1,0.2608333333333333,/models/gpt-4-1/providers,false\nGemini 2.5 Flash (Sep),0.25766666666666665,/models/gemini-2-5-flash-preview-09-2025/providers,false\nGLM-4.6,0.25483333333333336,/models/glm-4-6-reasoning/providers,false\nHermes 4 405B,0.24933333333333332,/models/hermes-4-llama-3-1-405b/providers,false\nKimi K2 0905,0.24033333333333334,/models/kimi-k2-0905/providers,false\nDoubao Seed Code,0.23866666666666667,/models/doubao-seed-code/providers,false\nMistral Large 3,0.23683333333333334,/models/mistral-large-3/providers,false\nLlama 4 Maverick,0.23516666666666666,/models/llama-4-maverick/providers,false\nGrok 4.1 Fast,0.235,/models/grok-4-1-fast-reasoning/providers,false\nQwen3 Max,0.2335,/models/qwen3-max/providers,false\nERNIE 5.0 Thinking 
Preview,0.23183333333333334,/models/ernie-5-0-thinking-preview/providers,false\nGPT-5 mini (high),0.22966666666666666,/models/gpt-5-mini/providers,false\nDeepSeek V3.2,0.22766666666666666,/models/deepseek-v3-2/providers,false\nGrok Code Fast 1,0.2275,/models/grok-code-fast-1/providers,false\nDeepSeek V3.1 Terminus,0.22616666666666665,/models/deepseek-v3-1-terminus/providers,false\nHermes 4 70B,0.223,/models/hermes-4-llama-3-1-70b-reasoning/providers,false\nQwen3 235B A22B 2507,0.22116666666666668,/models/qwen3-235b-a22b-instruct-2507-reasoning/providers,false\nGrok 4 Fast,0.22033333333333333,/models/grok-4-fast-reasoning/providers,false\nLlama 3.1 405B,0.21833333333333332,/models/llama-3-1-instruct-405b/providers,false\nGPT-5.1 Codex mini (high),0.21733333333333332,/models/gpt-5-1-codex-mini/providers,false\nGPT-5 mini (medium),0.21233333333333335,/models/gpt-5-mini-medium/providers,false\nNova 2.0 Pro Preview (low),0.20983333333333334,/models/nova-2-0-pro-reasoning-low/providers,false\nNova 2.0 Pro Preview (medium),0.2095,/models/nova-2-0-pro-reasoning-medium/providers,false\nMiniMax-M2,0.20833333333333334,/models/minimax-m2/providers,false\nQwen3 VL 235B A22B,0.2045,/models/qwen3-vl-235b-a22b-reasoning/providers,false\nGLM-4.6,0.20266666666666666,/models/glm-4-6/providers,false\nGLM-4.5V,0.20133333333333334,/models/glm-4-5v-reasoning/providers,false\nMagistral Medium 1.2,0.20083333333333334,/models/magistral-medium-2509/providers,false\ngpt-oss-120B (high),0.20016666666666666,/models/gpt-oss-120b/providers,false\nDevstral 2,0.198,/models/devstral-2/providers,false\nQwen3 VL 235B A22B,0.19216666666666668,/models/qwen3-vl-235b-a22b-instruct/providers,false\nLlama Nemotron Ultra,0.192,/models/llama-3-1-nemotron-ultra-253b-v1-reasoning/providers,false\nQwen3 VL 8B,0.19166666666666668,/models/qwen3-vl-8b-instruct/providers,false\nNova Premier,0.18983333333333333,/models/nova-premier/providers,false\nQwen3 VL 8B,0.18983333333333333,/models/qwen3-vl-8b-reasoning/providers,false\nMistral Medium 3.1,0.1895,/models/mistral-medium-3-1/providers,false\nSolar Pro 2,0.18533333333333332,/models/solar-pro-2-reasoning/providers,false\nDeepSeek R1 Distill Llama 70B,0.185,/models/deepseek-r1-distill-llama-70b/providers,false\nKAT-Coder-Pro V1,0.18466666666666667,/models/kat-coder-pro-v1/providers,false\nGPT-5 nano (high),0.18283333333333332,/models/gpt-5-nano/providers,false\nQwen3 Next 80B A3B,0.18216666666666667,/models/qwen3-next-80b-a3b-reasoning/providers,false\nDevstral Medium,0.18166666666666667,/models/devstral-medium/providers,false\ngpt-oss-120B (low),0.1815,/models/gpt-oss-120b-low/providers,false\nERNIE 4.5 300B A47B,0.17833333333333334,/models/ernie-4-5-300b-a47b/providers,false\nNova 2.0 Omni (low),0.1775,/models/nova-2-0-omni-reasoning-low/providers,false\nQwen3 235B 2507,0.17583333333333334,/models/qwen3-235b-a22b-instruct-2507/providers,false\nNova 2.0 Lite (medium),0.17366666666666666,/models/nova-2-0-lite-reasoning-medium/providers,false\nSeed-OSS-36B-Instruct,0.17333333333333334,/models/seed-oss-36b-instruct/providers,false\nNova 2.0 Omni (medium),0.172,/models/nova-2-0-omni-reasoning-medium/providers,false\nGemini 2.5 Flash-Lite (Sep),0.17133333333333334,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning/providers,false\nNova 2.0 Lite (low),0.1675,/models/nova-2-0-lite-reasoning-low/providers,false\nQwen3 Next 80B A3B,0.16716666666666666,/models/qwen3-next-80b-a3b-instruct/providers,false\nApriel-v1.6-15B-Thinker,0.166,/models/apriel-v1-6-15b-thinker/providers,false\nGPT-5 nano 
(medium),0.16383333333333333,/models/gpt-5-nano-medium/providers,false\nQwen3 VL 32B,0.16366666666666665,/models/qwen3-vl-32b-reasoning/providers,false\nClaude 4.5 Haiku,0.16183333333333333,/models/claude-4-5-haiku-reasoning/providers,false\nLlama Nemotron Super 49B v1.5,0.16083333333333333,/models/llama-nemotron-super-49b-v1-5-reasoning/providers,false\nQwen3 VL 30B A3B,0.15933333333333333,/models/qwen3-vl-30b-a3b-reasoning/providers,false\nNova 2.0 Pro Preview,0.15866666666666668,/models/nova-2-0-pro/providers,false\nGrok 4.1 Fast,0.158,/models/grok-4-1-fast/providers,false\nCommand A,0.15466666666666667,/models/command-a/providers,false\nQwen3 30B A3B 2507,0.15466666666666667,/models/qwen3-30b-a3b-2507-reasoning/providers,false\nApriel-v1.5-15B-Thinker,0.153,/models/apriel-v1-5-15b-thinker/providers,false\nGLM-4.5-Air,0.1505,/models/glm-4-5-air/providers,false\nDevstral Small 2,0.14983333333333335,/models/devstral-small-2/providers,false\nQwen3 VL 30B A3B,0.1475,/models/qwen3-vl-30b-a3b-instruct/providers,false\ngpt-oss-20B (high),0.1465,/models/gpt-oss-20b/providers,false\nQwen3 Omni 30B A3B,0.14383333333333334,/models/qwen3-omni-30b-a3b-reasoning/providers,false\nLlama 4 Scout,0.1435,/models/llama-4-scout/providers,false\nQwen3 30B A3B 2507,0.143,/models/qwen3-30b-a3b-2507/providers,false\nMistral Small 3.2,0.14266666666666666,/models/mistral-small-3-2/providers,false\nMotif-2-12.7B,0.14116666666666666,/models/motif-2-12-7b/providers,false\nQwen3 VL 32B,0.1405,/models/qwen3-vl-32b-instruct/providers,false\nDevstral Small,0.1395,/models/devstral-small/providers,false\nGrok 3 mini Reasoning (high),0.13866666666666666,/models/grok-3-mini-reasoning/providers,false\nLing-flash-2.0,0.13733333333333334,/models/ling-flash-2-0/providers,false\ngpt-oss-20B (low),0.13683333333333333,/models/gpt-oss-20b-low/providers,false\nGranite 4.0 H Small,0.1345,/models/granite-4-0-h-small/providers,false\nClaude 4.5 Haiku,0.13416666666666666,/models/claude-4-5-haiku/providers,false\nGemini 2.5 Flash-Lite (Sep),0.1335,/models/gemini-2-5-flash-lite-preview-09-2025/providers,false\nNVIDIA Nemotron Nano 12B v2 VL,0.1335,/models/nvidia-nemotron-nano-12b-v2-vl-reasoning/providers,false\nEXAONE 4.0 32B,0.13333333333333333,/models/exaone-4-0-32b-reasoning/providers,false\nNova 2.0 Lite,0.13283333333333333,/models/nova-2-0-lite/providers,false\nMagistral Small 1.2,0.12766666666666668,/models/magistral-small-2509/providers,false\nQwen3 8B,0.12733333333333333,/models/qwen3-8b-instruct-reasoning/providers,false\nMinistral 14B (Dec '25),0.12133333333333333,/models/ministral-14b/providers,false\nMinistral 8B (Dec '25),0.11816666666666667,/models/ministral-8b/providers,false\nDeepSeek R1 0528 Qwen3 8B,0.11816666666666667,/models/deepseek-r1-qwen3-8b/providers,false\nGPT-5 nano (minimal),0.11366666666666667,/models/gpt-5-nano-minimal/providers,false\nNVIDIA Nemotron Nano 9B V2,0.10733333333333334,/models/nvidia-nemotron-nano-9b-v2-reasoning/providers,false\nQwen3 8B,0.10283333333333333,/models/qwen3-8b-instruct/providers,false\nOLMo 3 7B Think,0.10166666666666667,/models/olmo-3-7b-think/providers,false\nEXAONE 4.0 32B,0.09933333333333333,/models/exaone-4-0-32b/providers,false\nMinistral 3B (Dec '25),0.07783333333333334,/models/ministral-3b/providers,false"}
AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).
{"@context":"https://schema.org","@type":"Dataset","name":"AA-Omniscience Hallucination Rate","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,omniscienceHallucinationRate,detailsUrl,isLabClaimedValue\nQwen3 8B,0.955043655953929,/models/qwen3-8b-instruct/providers,false\nQwen3 30B A3B 2507,0.9463243873978997,/models/qwen3-30b-a3b-2507/providers,false\nLing-flash-2.0,0.9410741885625966,/models/ling-flash-2-0/providers,false\nHermes 4 70B,0.9397254397254398,/models/hermes-4-llama-3-1-70b-reasoning/providers,false\nHermes 4 405B,0.9379261697625205,/models/hermes-4-llama-3-1-405b-reasoning/providers,false\nOLMo 3 7B Think,0.9365491651205937,/models/olmo-3-7b-think/providers,false\nSolar Pro 2,0.9337152209492635,/models/solar-pro-2-reasoning/providers,false\ngpt-oss-20B (high),0.9320445225541887,/models/gpt-oss-20b/providers,false\nGLM-4.6,0.9308879445314248,/models/glm-4-6-reasoning/providers,false\nNova 2.0 Omni (medium),0.928743961352657,/models/nova-2-0-omni-reasoning-medium/providers,false\nMinistral 8B (Dec '25),0.9276129276129276,/models/ministral-8b/providers,false\nQwen3 Next 80B A3B,0.9269561737042226,/models/qwen3-next-80b-a3b-instruct/providers,false\nDeepSeek V3.2,0.9251186879585671,/models/deepseek-v3-2/providers,false\nQwen3 VL 30B A3B,0.9241446725317694,/models/qwen3-vl-30b-a3b-instruct/providers,false\nGPT-4.1,0.9228861330326945,/models/gpt-4-1/providers,false\nGLM-4.5-Air,0.9205414949970571,/models/glm-4-5-air/providers,false\nNVIDIA Nemotron Nano 12B v2 VL,0.9197922677437969,/models/nvidia-nemotron-nano-12b-v2-vl-reasoning/providers,false\nApriel-v1.6-15B-Thinker,0.916466826538769,/models/apriel-v1-6-15b-thinker/providers,false\nNova 2.0 Lite (medium),0.9076240419524002,/models/nova-2-0-lite-reasoning-medium/providers,false\nMagistral Small 1.2,0.9073366450133741,/models/magistral-small-2509/providers,false\nQwen3 VL 32B,0.9069226294357184,/models/qwen3-vl-32b-instruct/providers,false\ngpt-oss-120B (low),0.9051109753614335,/models/gpt-oss-120b-low/providers,false\nMinistral 14B (Dec '25),0.904969650986343,/models/ministral-14b/providers,false\nQwen3 Max Thinking,0.9048376107199637,/models/qwen3-max-thinking/providers,false\nQwen3 VL 8B,0.9047521086196256,/models/qwen3-vl-8b-reasoning/providers,false\nQwen3 VL 235B A22B,0.904683309263462,/models/qwen3-vl-235b-a22b-instruct/providers,false\nGemini 2.5 Flash (Sep),0.9036820835204311,/models/gemini-2-5-flash-preview-09-2025/providers,false\nQwen3 8B,0.9035523300229182,/models/qwen3-8b-instruct-reasoning/providers,false\nQwen3 VL 8B,0.902680412371134,/models/qwen3-vl-8b-instruct/providers,false\nNova 2.0 Pro Preview (medium),0.9013282732447818,/models/nova-2-0-pro-reasoning-medium/providers,false\ngpt-oss-120B 
(high),0.899562408835174,/models/gpt-oss-120b/providers,false\nQwen3 Omni 30B A3B,0.8970216079423788,/models/qwen3-omni-30b-a3b-reasoning/providers,false\nQwen3 235B A22B 2507,0.8964262786218703,/models/qwen3-235b-a22b-instruct-2507-reasoning/providers,false\nQwen3 VL 30B A3B,0.8929421094369548,/models/qwen3-vl-30b-a3b-reasoning/providers,false\nGPT-5.1,0.891735918744229,/models/gpt-5-1-non-reasoning/providers,false\nQwen3 Max,0.8904109589041096,/models/qwen3-max/providers,false\nMotif-2-12.7B,0.8889967009509024,/models/motif-2-12-7b/providers,false\nMiniMax-M2,0.8888421052631579,/models/minimax-m2/providers,false\nGemini 2.5 Pro,0.8866968808317782,/models/gemini-2-5-pro/providers,false\nDeepSeek V3.2 Speciale,0.8851119894598155,/models/deepseek-v3-2-speciale/providers,false\nGemini 2.5 Flash (Sep),0.883131705090162,/models/gemini-2-5-flash-preview-09-2025-reasoning/providers,false\nGemini 3 Pro Preview (high),0.8798993167925206,/models/gemini-3-pro/providers,false\nGPT-5 (minimal),0.8777192580719029,/models/gpt-5-minimal/providers,false\nGPT-5 nano (minimal),0.8770214366303122,/models/gpt-5-nano-minimal/providers,false\nLlama 4 Maverick,0.8757899324471562,/models/llama-4-maverick/providers,false\nGranite 4.0 H Small,0.8725207009435779,/models/granite-4-0-h-small/providers,false\nGemini 3 Pro Preview (low),0.8718740351960481,/models/gemini-3-pro-low/providers,false\nDevstral Small 2,0.8688492452460302,/models/devstral-small-2/providers,false\nDeepSeek V3.1 Terminus,0.8684040491061813,/models/deepseek-v3-1-terminus/providers,false\nQwen3 Next 80B A3B,0.8681475443244345,/models/qwen3-next-80b-a3b-reasoning/providers,false\no3,0.8674634794156707,/models/o3/providers,false\nDeepSeek R1 0528 Qwen3 8B,0.8667548667548668,/models/deepseek-r1-qwen3-8b/providers,false\nNova 2.0 Pro Preview (low),0.8666947901286648,/models/nova-2-0-pro-reasoning-low/providers,false\nGemini 2.5 Flash-Lite (Sep),0.8660498793242156,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning/providers,false\nEXAONE 4.0 32B,0.8625,/models/exaone-4-0-32b-reasoning/providers,false\nQwen3 30B A3B 2507,0.8623817034700315,/models/qwen3-30b-a3b-2507-reasoning/providers,false\nNova 2.0 Lite (low),0.8612612612612612,/models/nova-2-0-lite-reasoning-low/providers,false\ngpt-oss-20B (low),0.8605908476539873,/models/gpt-oss-20b-low/providers,false\nSeed-OSS-36B-Instruct,0.8572580645161291,/models/seed-oss-36b-instruct/providers,false\nERNIE 5.0 Thinking Preview,0.8533304404426123,/models/ernie-5-0-thinking-preview/providers,false\nNova 2.0 Lite,0.8506630789928887,/models/nova-2-0-lite/providers,false\nMistral Large 3,0.8473465822231928,/models/mistral-large-3/providers,false\nEXAONE 4.0 32B,0.844559585492228,/models/exaone-4-0-32b/providers,false\nDevstral 2,0.8443474646716542,/models/devstral-2/providers,false\nQwen3 VL 235B A22B,0.8424470982610518,/models/qwen3-vl-235b-a22b-reasoning/providers,false\nNova 2.0 Omni (low),0.8407294832826747,/models/nova-2-0-omni-reasoning-low/providers,false\nApriel-v1.5-15B-Thinker,0.8400236127508854,/models/apriel-v1-5-15b-thinker/providers,false\nDeepSeek R1 0528,0.8336082960169692,/models/deepseek-r1/providers,false\nGLM-4.5V,0.8332637729549248,/models/glm-4-5v-reasoning/providers,false\nQwen3 VL 32B,0.8322040653646872,/models/qwen3-vl-32b-reasoning/providers,false\nMistral Medium 3.1,0.824799506477483,/models/mistral-medium-3-1/providers,false\nDeepSeek V3.2,0.8192771084337349,/models/deepseek-v3-2-reasoning/providers,false\nGPT-5 
(medium),0.8163428267234496,/models/gpt-5-medium/providers,false\nGPT-5 (high),0.8099375509095845,/models/gpt-5/providers,false\nLlama Nemotron Ultra,0.8094059405940595,/models/llama-3-1-nemotron-ultra-253b-v1-reasoning/providers,false\nDeepSeek R1 Distill Llama 70B,0.808997955010225,/models/deepseek-r1-distill-llama-70b/providers,false\nGrok 4.1 Fast,0.8089865399841647,/models/grok-4-1-fast/providers,false\nDeepSeek V3.2 Exp,0.8060246462802373,/models/deepseek-v3-2-reasoning-0925/providers,false\nHermes 4 405B,0.7992895204262878,/models/hermes-4-llama-3-1-405b/providers,false\nNova 2.0 Pro Preview,0.7872424722662441,/models/nova-2-0-pro/providers,false\nLlama 4 Scout,0.7869235259778167,/models/llama-4-scout/providers,false\nGrok Code Fast 1,0.786839266450917,/models/grok-code-fast-1/providers,false\nDoubao Seed Code,0.7854640980735552,/models/doubao-seed-code/providers,false\nCommand A,0.7791798107255521,/models/command-a/providers,false\nGPT-5.2 (xhigh),0.7786302926967889,/models/gpt-5-2/providers,false\nMinistral 3B (Dec '25),0.7780589192120008,/models/ministral-3b/providers,false\nGPT-5 (low),0.7742864624247185,/models/gpt-5-low/providers,false\nDevstral Small,0.7660275033895022,/models/devstral-small/providers,false\nMistral Small 3.2,0.7647744945567652,/models/mistral-small-3-2/providers,false\nQwen3 235B 2507,0.7640040444893832,/models/qwen3-235b-a22b-instruct-2507/providers,false\nLlama Nemotron Super 49B v1.5,0.7572989076464747,/models/llama-nemotron-super-49b-v1-5-reasoning/providers,false\nKimi K2 Thinking,0.7439943476212906,/models/kimi-k2-thinking/providers,false\nClaude Opus 4.5,0.7422258592471358,/models/claude-opus-4-5/providers,false\nDeepSeek V3.1 Terminus,0.7395881006864988,/models/deepseek-v3-1-terminus-reasoning/providers,false\nGrok 4.1 Fast,0.7174291938997821,/models/grok-4-1-fast-reasoning/providers,false\nNova Premier,0.707261880271549,/models/nova-premier/providers,false\nKimi K2 0905,0.6895568231680561,/models/kimi-k2-0905/providers,false\nGrok 4 Fast,0.6737922188969645,/models/grok-4-fast-reasoning/providers,false\nGLM-4.6,0.6711956521739131,/models/glm-4-6/providers,false\nERNIE 4.5 300B A47B,0.665314401622718,/models/ernie-4-5-300b-a47b/providers,false\nKAT-Coder-Pro V1,0.6623058053965658,/models/kat-coder-pro-v1/providers,false\nGemini 2.5 Flash-Lite (Sep),0.6585881900365455,/models/gemini-2-5-flash-lite-preview-09-2025/providers,false\nGrok 4,0.638996138996139,/models/grok-4/providers,false\nDevstral Medium,0.6228105906313646,/models/devstral-medium/providers,false\nNVIDIA Nemotron Nano 9B V2,0.6043689320388349,/models/nvidia-nemotron-nano-9b-v2-reasoning/providers,false\nMagistral Medium 1.2,0.5970802919708029,/models/magistral-medium-2509/providers,false\nGPT-5 nano (high),0.5865796451152355,/models/gpt-5-nano/providers,false\nClaude Opus 4.5,0.5780837972458248,/models/claude-opus-4-5-thinking/providers,false\nGPT-5 mini (high),0.5527909995672868,/models/gpt-5-mini/providers,false\nGPT-5 nano (medium),0.523021726131154,/models/gpt-5-nano-medium/providers,false\nClaude 4.5 Sonnet,0.5143704379562044,/models/claude-4-5-sonnet/providers,false\nLlama 3.1 405B,0.511727078891258,/models/llama-3-1-instruct-405b/providers,false\nGPT-5.1 (high),0.5115919629057187,/models/gpt-5-1/providers,false\nGPT-5.1 Codex mini (high),0.5112862010221465,/models/gpt-5-1-codex-mini/providers,false\nClaude 4.1 Opus,0.4838709677419355,/models/claude-4-1-opus-thinking/providers,false\nClaude 4.5 Sonnet,0.47732754462132176,/models/claude-4-5-sonnet-thinking/providers,false\nGPT-5 mini 
(medium),0.43377063055438003,/models/gpt-5-mini-medium/providers,false\nClaude 4.5 Haiku,0.2606880095446411,/models/claude-4-5-haiku-reasoning/providers,false\nClaude 4.5 Haiku,0.2467757459095284,/models/claude-4-5-haiku/providers,false\nGrok 3 mini Reasoning (high),0.24109907120743035,/models/grok-3-mini-reasoning/providers,false"}
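A minimal sketch tying the three AA-Omniscience definitions above together, computing all of them from raw per-question outcome counts; the exact official grading (for example, whether partial answers affect the index) is an assumption here.

```python
# Sketch of the three AA-Omniscience metrics defined above, from raw
# per-question outcome counts. The formulas follow the textual definitions;
# treating partial answers as contributing 0 to the index is an assumption.

def omniscience_metrics(correct: int, incorrect: int,
                        partial: int, not_attempted: int) -> dict[str, float]:
    total = correct + incorrect + partial + not_attempted
    return {
        # +1 per correct, -1 per incorrect, 0 for refusals: range -100 to 100,
        # and 0 when there are as many correct as incorrect answers.
        "index": 100 * (correct - incorrect) / total,
        # Correct answers out of all questions, attempted or not.
        "accuracy": correct / total,
        # Incorrect answers out of all non-correct responses.
        "hallucination_rate": incorrect / (incorrect + partial + not_attempted),
    }

# Example: 40% correct, hallucinates on half of the rest, refuses the remainder.
print(omniscience_metrics(correct=40, incorrect=30, partial=0, not_attempted=30))
# {'index': 10.0, 'accuracy': 0.4, 'hallucination_rate': 0.5}
```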
While higher intelligence models are typically more expensive, they do not all follow the same price-quality curve.
Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 ratio).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
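For example, the 3:1 blend works out as follows (the prices in the example are hypothetical):

```python
# The 3:1 blend above, as arithmetic. Example prices are hypothetical.

def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """USD per million tokens, weighting input 3:1 over output."""
    return (3 * input_usd_per_m + output_usd_per_m) / 4

print(blended_price(2.50, 10.00))  # 4.375
```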
There is a trade-off between model quality and output speed, with higher intelligence models typically having lower output speed.
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Seconds to receive a 500-token response. Key components: time to first token (latency) and the time to generate 500 tokens at the model's output speed.
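A sketch of how these speed and latency metrics can be measured against any streaming completion API; `stream_chunks` is a stand-in for a real SDK's streaming iterator, and counting one token per chunk is an approximation of true token counts.

```python
import time

# Measurement sketch for the output speed, latency, and total-response-time
# metrics described above. Assumes a generic iterator of streamed chunks.

def measure_stream(stream_chunks) -> dict[str, float]:
    t0 = time.monotonic()
    t_first = None
    n_tokens = 0
    for _chunk in stream_chunks:
        if t_first is None:
            t_first = time.monotonic()  # time to first token (TTFT)
        n_tokens += 1
    t_end = time.monotonic()
    if t_first is None:
        raise ValueError("stream produced no chunks")
    ttft = t_first - t0
    # Output speed is measured only over the generation phase, i.e. after
    # the first chunk has been received.
    tps = (n_tokens - 1) / (t_end - t_first) if t_end > t_first else float("inf")
    return {
        "ttft_s": ttft,
        "output_tokens_per_s": tps,
        # Estimated seconds to receive a 500-token response:
        # latency plus generation time at the measured output speed.
        "est_500_token_response_s": ttft + 500 / tps,
    }
```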
The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).
The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
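Concretely, the arithmetic is just token counts times per-million rates; the token totals and prices below are hypothetical placeholders.

```python
# Cost-to-run sketch: token counts priced at per-million rates.

def eval_run_cost(input_tokens: int, output_tokens: int,
                  input_usd_per_m: float, output_usd_per_m: float) -> float:
    return (input_tokens * input_usd_per_m
            + output_tokens * output_usd_per_m) / 1_000_000

# e.g. 5M input + 20M output tokens at $2.50 / $10.00 per million (hypothetical)
print(eval_run_cost(5_000_000, 20_000_000, 2.50, 10.00))  # 212.5
```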
Larger context windows are relevant to RAG (Retrieval Augmented Generation) LLM workflows which typically involve reasoning and information retrieval of large amounts of data.
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
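A small sketch of the budgeting implication: a request only fits if input plus requested output stays within the combined window, and the output side may hit its own cap first. The window sizes and limits used below are illustrative.

```python
# Budget-check sketch for a combined context window with a separate
# (model-specific, here hypothetical) output-token cap.

def fits_context(input_tokens: int, max_output_tokens: int,
                 context_window: int, output_limit: int | None = None) -> bool:
    if output_limit is not None and max_output_tokens > output_limit:
        return False
    return input_tokens + max_output_tokens <= context_window

print(fits_context(120_000, 8_000, context_window=128_000))  # True
print(fits_context(125_000, 8_000, context_window=128_000))  # False
```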
{"@context":"https://schema.org","@type":"Dataset","name":"Context Window","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Context Window: Tokens Limit; Higher is better","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varied by model).
Price per token included in the request/message sent to the API, represented as USD per million tokens.
One-time cost charged when storing a prompt in the cache for future reuse, represented as USD per million tokens.
Price per token for cached prompts (previously processed), typically offering a significant discount compared to regular input price, represented as USD per million tokens.
Cost to maintain tokens in cache storage, charged per million tokens per hour. Currently only applicable to Google's Gemini models.
Price per token generated by the model (received from the API), represented as USD per million tokens.
Price for 1,000 images at a resolution of 1 Megapixel (1024 x 1024) processed by the model.
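Putting the pricing components above together, a request's cost can be sketched as each token bucket times its per-million rate, plus any cache storage time. The bucket names and rates here are illustrative, not any provider's actual billing schema.

```python
# Request-cost sketch combining the pricing components defined above. All
# rates are USD per million tokens, except cache storage, which is USD per
# million tokens per hour.

def request_cost(tokens: dict[str, int], rates: dict[str, float],
                 storage_hours: float = 0.0) -> float:
    usage = (
        tokens.get("input", 0) * rates.get("input", 0.0)
        + tokens.get("cache_write", 0) * rates.get("cache_write", 0.0)    # one-time write
        + tokens.get("cached_input", 0) * rates.get("cached_input", 0.0)  # discounted reads
        + tokens.get("output", 0) * rates.get("output", 0.0)
    )
    storage = (tokens.get("cache_write", 0)
               * rates.get("cache_storage", 0.0) * storage_hours)
    return (usage + storage) / 1_000_000

# Hypothetical request: mostly cache hits, small fresh prompt, short answer.
print(request_cost(
    tokens={"input": 20_000, "cached_input": 80_000, "output": 4_000},
    rates={"input": 2.50, "cached_input": 0.25, "output": 10.00},
))  # 0.11
```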
Measured by Output Speed (tokens per second)
Length of tokens provided in the request. See Prompt Options above to see benchmarks of different input prompt lengths across other charts.
Median measurement per day, based on 8 measurements each day at different times. Labels represent the start of each week's measurements.
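A minimal sketch of that daily aggregation, using made-up measurements rather than real data from the charts:

```python
from statistics import median

# Hypothetical: 8 output-speed measurements (tokens/s) from one day.
daily_measurements = [92.1, 88.4, 95.0, 90.2, 87.9, 93.6, 91.5, 89.8]
print(median(daily_measurements))  # the daily figure plotted on the time series
```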
Time to first token received, in seconds, after API request sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents time to receive the completion.
Measured by Time (seconds) to First Token
Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.
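For reasoning models, the two latency definitions can diverge substantially; a worked sketch with hypothetical timings:

```python
def first_answer_token_latency(ttft: float, thinking_time: float) -> float:
    """Seconds until the first *answer* token: time to first token (which
    may be a reasoning token) plus the model's 'thinking' time."""
    return ttft + thinking_time

# Hypothetical: 0.4 s to the first reasoning token, 12 s of thinking.
print(first_answer_token_latency(0.4, 12.0))  # 12.4 s to the first answer token
```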
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed.
Seconds to receive a 500 token response. Key components: time to first token, 'thinking' time for reasoning models, and generation time at the measured output speed.
Seconds to receive a 500 token response considering input processing time, 'thinking' time of reasoning models, and output speed.
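Putting those components together; the timings below are hypothetical, not measured values:

```python
def total_response_time(ttft: float, thinking_time: float,
                        output_speed: float, output_tokens: int = 500) -> float:
    """Seconds to receive a full response: time to first token, plus
    'thinking' time for reasoning models, plus generation time at the
    measured output speed (tokens per second)."""
    return ttft + thinking_time + output_tokens / output_speed

# Hypothetical: 0.5 s TTFT, 8 s thinking, 90 tokens/s output speed.
print(total_response_time(0.5, 8.0, 90.0))  # ~14.1 s for 500 tokens
```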
The total number of trainable weights and biases in the model, expressed in billions. These parameters are learned during training and determine the model's ability to process and generate responses.
The number of parameters actually executed during each inference forward pass, expressed in billions. For Mixture of Experts (MoE) models, a routing mechanism selects a subset of experts per token, resulting in fewer active than total parameters. Dense models use all parameters, so active equals total.
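A sketch of the total-versus-active distinction for a simplified MoE layout; the architecture numbers are hypothetical and do not describe any model listed above:

```python
def moe_param_counts(shared_b: float, per_expert_b: float,
                     num_experts: int, experts_per_token: int):
    """Total vs. active parameters (in billions) for a simple MoE layout:
    shared weights (attention, embeddings) always run, while the router
    activates only `experts_per_token` of the `num_experts` experts."""
    total = shared_b + num_experts * per_expert_b
    active = shared_b + experts_per_token * per_expert_b
    return total, active

# Hypothetical MoE: 10B shared, 8 experts of 5B each, 2 routed per token.
print(moe_param_counts(10, 5, 8, 2))  # (50, 20): 50B total, 20B active
# A dense model has no routing, so active equals total.
```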