
Comparison of Models: Intelligence, Performance & Price Analysis

Comparison and analysis of AI models across key performance metrics including quality, price, output speed, latency, context window and others. Click on any model to see detailed metrics. For more details, including our methodology, see our FAQs.

Model Comparison Summary

Intelligence: GPT-5.2 (xhigh) and Claude Opus 4.5 are the highest intelligence models, followed by GPT-5.2 Codex (xhigh) & Gemini 3 Pro Preview (high).
Output Speed (tokens/s): Gemini 2.5 Flash-Lite (Sep) (576 t/s) and Granite 3.3 8B (488 t/s) are the fastest models, followed by Gemini 2.5 Flash-Lite (Sep) & Nova Micro.
Latency (seconds): Apriel-v1.5-15B-Thinker (0.18s) and NVIDIA Nemotron Nano 12B v2 VL (0.21s) are the lowest latency models, followed by NVIDIA Nemotron 3 Nano & Olmo 3.1 32B Instruct.
Price ($ per M tokens): Gemma 3n E4B ($0.03) and DeepSeek-OCR ($0.05) are the cheapest models, followed by Llama 3.2 1B & Llama 3.2 3B.
Context Window: Llama 4 Scout (10m) and Grok 4.1 Fast (2m) are the largest context window models, followed by Grok 4.1 Fast & Gemini 2.0 Pro Experimental.

Highlights

Intelligence
Artificial Analysis Intelligence Index; Higher is better
Speed
Output Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better

Intelligence

Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}

Artificial Analysis Intelligence Index by Open Weights vs Proprietary

Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt
Proprietary
Open Weights

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).

{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Open Weights vs Proprietary","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}

Intelligence Evaluations

Intelligence evaluations measured independently by Artificial Analysis; Higher is better
Results claimed by AI Lab (not yet independently verified)
GDPval-AA ((ELO-500)/2000)
Terminal-Bench Hard (Agentic Coding & Terminal Use)
𝜏²-Bench Telecom (Agentic Tool Use)
AA-LCR (Long Context Reasoning)
AA-Omniscience Accuracy (Knowledge)
AA-Omniscience Non-Hallucination Rate (1 - Hallucination Rate)
Humanity's Last Exam (Reasoning & Knowledge)
GPQA Diamond (Scientific Reasoning)
SciCode (Coding)
IFBench (Instruction Following)
CritPt (Physics Reasoning)
MMMU Pro (Visual Reasoning)

While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
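The GDPval-AA entry in the evaluation list above is plotted on a normalized scale rather than as a raw ELO score. A minimal sketch of that transform, under the stated (ELO - 500) / 2000 scaling (the example ELO value is hypothetical):

```python
def normalize_gdpval_aa(elo: float) -> float:
    """Map a raw GDPval-AA ELO score onto the normalized scale used in the
    chart above, using the stated (ELO - 500) / 2000 transform."""
    return (elo - 500) / 2000


# Hypothetical model with a GDPval-AA ELO of 1300:
print(normalize_gdpval_aa(1300))  # 0.4
```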

Artificial Analysis Omniscience

AA-Omniscience Index

AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.

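The index can be reproduced from per-question outcomes. A minimal sketch, assuming each correct answer counts +1, each hallucinated (incorrect) answer counts -1, and each refusal counts 0, with the mean scaled to the -100 to 100 range; the exact per-question weighting used for AA-Omniscience is not stated here, so treat this as illustrative:

```python
def omniscience_index(correct: int, incorrect: int, refused: int) -> float:
    """Knowledge-reliability index on a -100..100 scale.

    Assumed scoring: +1 per correct answer, -1 per hallucinated answer,
    0 per refusal (no penalty for declining to answer)."""
    total = correct + incorrect + refused
    if total == 0:
        raise ValueError("no questions scored")
    return 100.0 * (correct - incorrect) / total


# Example: equal numbers of correct and incorrect answers give 0,
# regardless of how often the model refuses to answer.
print(omniscience_index(correct=30, incorrect=30, refused=40))  # 0.0
print(omniscience_index(correct=10, incorrect=50, refused=40))  # -40.0
```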

{"@context":"https://schema.org","@type":"Dataset","name":"AA-Omniscience Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"AA-Omniscience Index (higher is better) measures knowledge reliability and hallucination. It rewards correct answers, penalizes hallucinations, and has no penalty for refusing to answer. Scores range from -100 to 100, where 0 means as many correct as incorrect answers, and negative scores mean more incorrect than correct.","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":"modelName,omniscienceIndex,detailsUrl,isLabClaimedValue\nGemini 3 Pro Preview (high),12.867,/models/gemini-3-pro/providers,false\nClaude Opus 4.5,10.233,/models/claude-opus-4-5-thinking/providers,false\nGemini 3 Flash,8.233,/models/gemini-3-flash-reasoning/providers,false\nClaude 4.1 Opus,4.933,/models/claude-4-1-opus-thinking/providers,false\nGPT-5.1 (high),2.2,/models/gpt-5-1/providers,false\nGrok 4,0.95,/models/grok-4/providers,false\nRing-1T,0.017,/models/ring-1t/providers,false\nJamba 1.7 Large,-0.217,/models/jamba-1-7-large/providers,false\nLing-mini-2.0,-0.45,/models/ling-mini-2-0/providers,false\nJamba 1.7 Mini,-0.5,/models/jamba-1-7-mini/providers,false\nGemini 3 Flash,-0.917,/models/gemini-3-flash/providers,false\nGemini 3 Pro Preview (low),-1.05,/models/gemini-3-pro-low/providers,false\nClaude 3.7 Sonnet,-1.733,/models/claude-3-7-sonnet-thinking/providers,false\nClaude 4 Sonnet,-1.767,/models/claude-4-sonnet-thinking/providers,false\nClaude 4.5 Sonnet,-2.083,/models/claude-4-5-sonnet-thinking/providers,false\nGPT-5.2 (medium),-2.7,/models/gpt-5-2-medium/providers,false\nGPT-5.2 (xhigh),-4.317,/models/gpt-5-2/providers,false\nGPT-5.2 Codex (xhigh),-5.55,/models/gpt-5-2-codex/providers,false\nClaude 4.5 Haiku,-5.667,/models/claude-4-5-haiku-reasoning/providers,false\nClaude Opus 4.5,-6.45,/models/claude-opus-4-5/providers,false\nGPT-5.1 Codex (high),-7.017,/models/gpt-5-1-codex/providers,false\nGrok 3 mini Reasoning (high),-7.15,/models/grok-3-mini-reasoning/providers,false\nClaude 4.5 Haiku,-7.95,/models/claude-4-5-haiku/providers,false\nGPT-5 Codex (high),-9.667,/models/gpt-5-codex/providers,false\nClaude 4 Sonnet,-10.367,/models/claude-4-sonnet/providers,false\nClaude 4.5 Sonnet,-10.65,/models/claude-4-5-sonnet/providers,false\nClaude 3.7 Sonnet,-10.983,/models/claude-3-7-sonnet/providers,false\nGPT-5 (high),-11.1,/models/gpt-5/providers,false\nGPT-4o (Nov),-12.05,/models/gpt-4o/providers,false\no1,-12.817,/models/o1/providers,false\nGPT-5 (low),-12.933,/models/gpt-5-low/providers,false\nGPT-5 mini (medium),-12.933,/models/gpt-5-mini-medium/providers,false\nGPT-5 (medium),-13.733,/models/gpt-5-medium/providers,false\nGPT-5.2,-15.4,/models/gpt-5-2-non-reasoning/providers,false\no3,-17.183,/models/o3/providers,false\nGemini 2.5 Pro,-17.95,/models/gemini-2-5-pro/providers,false\nLlama 3.1 405B,-18.167,/models/llama-3-1-instruct-405b/providers,false\nGPT-5.1 Codex mini (high),-18.283,/models/gpt-5-1-codex-mini/providers,false\nDeepSeek V3.2 Speciale,-19.233,/models/deepseek-v3-2-speciale/providers,false\nGPT-5 mini (high),-19.617,/models/gpt-5-mini/providers,false\nGPT-4o 
(Aug),-21.733,/models/gpt-4o-2024-08-06/providers,false\nDeepSeek V3.2,-23.317,/models/deepseek-v3-2-reasoning/providers,false\nClaude 3.5 Haiku,-23.35,/models/claude-3-5-haiku/providers,false\nKimi K2 Thinking,-23.417,/models/kimi-k2-thinking/providers,false\nQwen3 Coder 480B,-23.967,/models/qwen3-coder-480b-a35b-instruct/providers,false\nGLM-4.6V,-26.25,/models/glm-4-6v-reasoning/providers,false\nDeepSeek V3.1 Terminus,-26.7,/models/deepseek-v3-1-terminus-reasoning/providers,false\nGPT-5 nano (medium),-27.35,/models/gpt-5-nano-medium/providers,false\nCogito v2.1,-27.417,/models/cogito-v2-1-reasoning/providers,false\nMagistral Medium 1.2,-27.633,/models/magistral-medium-2509/providers,false\nMagistral Medium 1,-28,/models/magistral-medium/providers,false\nKimi K2 0905,-28.35,/models/kimi-k2-0905/providers,false\nGLM-4.5,-29.017,/models/glm-4.5/providers,false\nGPT-5 nano (high),-29.65,/models/gpt-5-nano/providers,false\nDeepSeek R1 0528,-29.667,/models/deepseek-r1/providers,false\nMiniMax-M2.1,-29.8,/models/minimax-m2-1/providers,false\nKimi K2,-30.117,/models/kimi-k2/providers,false\nGrok 4 Fast,-30.5,/models/grok-4-fast-reasoning/providers,false\nDeepSeek V3.1,-30.583,/models/deepseek-v3-1-reasoning/providers,false\nGemini 2.5 Flash,-30.85,/models/gemini-2-5-flash-reasoning/providers,false\nGrok 4.1 Fast,-31.383,/models/grok-4-1-fast-reasoning/providers,false\nLlama 3.1 8B,-31.6,/models/llama-3-1-instruct-8b/providers,false\nDeepSeek V3.2 Exp,-31.9,/models/deepseek-v3-2-reasoning-0925/providers,false\nMistral Medium 3,-32.617,/models/mistral-medium-3/providers,false\nDevstral Medium,-32.8,/models/devstral-medium/providers,false\nGLM-4.6,-33.25,/models/glm-4-6/providers,false\nDeepSeek R1 (Jan),-33.633,/models/deepseek-r1-0120/providers,false\nHermes 4 405B,-34.633,/models/hermes-4-llama-3-1-405b/providers,false\nGrok 3,-35.267,/models/grok-3/providers,false\nMistral Large 2 (Nov),-35.517,/models/mistral-large-2/providers,false\nKAT-Coder-Pro V1,-35.533,/models/kat-coder-pro-v1/providers,false\nDoubao Seed Code,-35.933,/models/doubao-seed-code/providers,false\nGLM-4.7,-36.267,/models/glm-4-7/providers,false\nGPT-5.1,-36.583,/models/gpt-5-1-non-reasoning/providers,false\nGPT-5 (minimal),-36.667,/models/gpt-5-minimal/providers,false\nERNIE 4.5 300B A47B,-36.833,/models/ernie-4-5-300b-a47b/providers,false\no4-mini (high),-37.183,/models/o4-mini/providers,false\nHermes 4 405B,-37.367,/models/hermes-4-llama-3-1-405b-reasoning/providers,false\nGemini 2.5 Flash (Sep),-37.5,/models/gemini-2-5-flash-preview-09-2025-reasoning/providers,false\nGrok Code Fast 1,-38.033,/models/grok-code-fast-1/providers,false\nNova Premier,-38.317,/models/nova-premier/providers,false\nGLM-4.6V,-38.65,/models/glm-4-6v/providers,false\nOlmo 3.1 32B Think,-39.483,/models/olmo-3-1-32b-think/providers,false\nQwen3 Max Thinking,-39.783,/models/qwen3-max-thinking/providers,false\nMistral Large 3,-40.983,/models/mistral-large-3/providers,false\nGemini 2.5 Flash (Sep),-41.317,/models/gemini-2-5-flash-preview-09-2025/providers,false\nLlama 3.1 Nemotron 70B,-41.417,/models/llama-3-1-nemotron-instruct-70b/providers,false\nMiMo-V2-Flash,-41.833,/models/mimo-v2-flash-reasoning/providers,false\nGPT-4.1,-42.133,/models/gpt-4-1/providers,false\nDeepSeek V3 0324,-42.283,/models/deepseek-v3-0324/providers,false\nERNIE 5.0 Thinking Preview,-42.367,/models/ernie-5-0-thinking-preview/providers,false\nNVIDIA Nemotron Nano 9B V2,-43.217,/models/nvidia-nemotron-nano-9b-v2-reasoning/providers,false\nLlama 4 
Maverick,-43.467,/models/llama-4-maverick/providers,false\nDeepSeek V3.1,-43.533,/models/deepseek-v3-1/providers,false\nNova Lite,-43.55,/models/nova-lite/providers,false\nQwen3 Max (Preview),-43.567,/models/qwen3-max-preview/providers,false\nDeepSeek V3 (Dec),-43.633,/models/deepseek-v3/providers,false\nGemini 2.5 Flash-Lite (Sep),-43.717,/models/gemini-2-5-flash-lite-preview-09-2025/providers,false\nGemini 2.5 Flash,-43.75,/models/gemini-2-5-flash/providers,false\nGLM-4.6,-43.883,/models/glm-4-6-reasoning/providers,false\no3-mini (high),-44.283,/models/o3-mini-high/providers,false\nGemini 2.0 Flash,-44.333,/models/gemini-2-0-flash/providers,false\nLlama 3.1 70B,-44.417,/models/llama-3-1-instruct-70b/providers,false\nDeepSeek V3.1 Terminus,-44.583,/models/deepseek-v3-1-terminus/providers,false\nMiMo-V2-Flash,-44.6,/models/mimo-v2-flash/providers,false\nQwen3 Max,-44.9,/models/qwen3-max/providers,false\nMagistral Small 1,-45.2,/models/magistral-small/providers,false\nQwen3 235B 2507,-45.383,/models/qwen3-235b-a22b-instruct-2507/providers,false\nQwen3 235B,-45.55,/models/qwen3-235b-a22b-instruct-reasoning/providers,false\nLlama Nemotron Ultra,-46.2,/models/llama-3-1-nemotron-ultra-253b-v1-reasoning/providers,false\nGLM-4.5V,-46.417,/models/glm-4-5v-reasoning/providers,false\nQwen3 VL 235B A22B,-46.567,/models/qwen3-vl-235b-a22b-reasoning/providers,false\nGemini 2.5 Flash-Lite,-46.983,/models/gemini-2-5-flash-lite-reasoning/providers,false\nLlama Nemotron Super 49B v1.5,-47.2,/models/llama-nemotron-super-49b-v1-5/providers,false\nDeepSeek R1 Distill Llama 70B,-47.433,/models/deepseek-r1-distill-llama-70b/providers,false\nLlama Nemotron Super 49B v1.5,-47.467,/models/llama-nemotron-super-49b-v1-5-reasoning/providers,false\nNova 2.0 Pro Preview (low),-47.5,/models/nova-2-0-pro-reasoning-low/providers,false\nQwen3 235B A22B 2507,-47.7,/models/qwen3-235b-a22b-instruct-2507-reasoning/providers,false\nMistral Medium 3.1,-47.9,/models/mistral-medium-3-1/providers,false\nDevstral 2,-47.917,/models/devstral-2/providers,false\nGLM-4.7,-48.233,/models/glm-4-7-non-reasoning/providers,false\nNova Pro,-48.517,/models/nova-pro/providers,false\nDeepSeek V3.2,-48.683,/models/deepseek-v3-2/providers,false\nK2-V2 (low),-49.017,/models/k2-v2-low/providers,false\nDeepSeek V3.2 Exp,-49.117,/models/deepseek-v3-2-0925/providers,false\nNova Micro,-49.35,/models/nova-micro/providers,false\nMiniMax-M2,-49.533,/models/minimax-m2/providers,false\nCommand A,-49.583,/models/command-a/providers,false\nHermes 4 70B,-50.033,/models/hermes-4-llama-3-1-70b/providers,false\nMiniMax M1 80k,-50.167,/models/minimax-m1-80k/providers,false\nNova 2.0 Pro Preview (medium),-50.3,/models/nova-2-0-pro-reasoning-medium/providers,false\nQwen3 14B,-50.317,/models/qwen3-14b-instruct-reasoning/providers,false\nNova 2.0 Pro Preview,-50.367,/models/nova-2-0-pro/providers,false\nK2-V2 (medium),-50.6,/models/k2-v2-medium/providers,false\nHermes 4 70B,-50.717,/models/hermes-4-llama-3-1-70b-reasoning/providers,false\nLlama 3.3 Nemotron Super 49B,-51.017,/models/llama-3-3-nemotron-super-49b/providers,false\nMistral Small 3.2,-51.3,/models/mistral-small-3-2/providers,false\nNova 2.0 Omni (low),-51.4,/models/nova-2-0-omni-reasoning-low/providers,false\nQwen3 32B,-51.5,/models/qwen3-32b-instruct-reasoning/providers,false\nQwen3 Coder 30B A3B,-51.7,/models/qwen3-coder-30b-a3b-instruct/providers,false\ngpt-oss-120B (high),-51.933,/models/gpt-oss-120b/providers,false\nDevstral Small,-51.967,/models/devstral-small/providers,false\nHyperCLOVA X SEED Think 
(32B),-51.983,/models/hyperclova-x-seed-think-32b/providers,false\nMistral Small 3.1,-52.183,/models/mistral-small-3-1/providers,false\nGrok 4.1 Fast,-52.317,/models/grok-4-1-fast/providers,false\nQwen3 30B,-52.333,/models/qwen3-30b-a3b-instruct-reasoning/providers,false\nNVIDIA Nemotron 3 Nano,-52.383,/models/nvidia-nemotron-3-nano-30b-a3b-reasoning/providers,false\nINTELLECT-3,-52.383,/models/intellect-3/providers,false\nQwen3 Next 80B A3B,-52.783,/models/qwen3-next-80b-a3b-reasoning/providers,false\nLlama 4 Scout,-53.05,/models/llama-4-scout/providers,false\nQwen3 VL 32B,-53.233,/models/qwen3-vl-32b-reasoning/providers,false\nOlmo 3.1 32B Instruct,-53.317,/models/olmo-3-1-32b-instruct/providers,false\nQwen2.5 72B,-53.517,/models/qwen2-5-72b-instruct/providers,false\nSeed-OSS-36B-Instruct,-53.533,/models/seed-oss-36b-instruct/providers,false\nQwen3 VL 8B,-53.8,/models/qwen3-vl-8b-instruct/providers,false\nQwen3 4B 2507,-53.833,/models/qwen3-4b-2507-instruct/providers,false\nQwen3 VL 235B A22B,-53.867,/models/qwen3-vl-235b-a22b-instruct/providers,false\nQwen3 VL 8B,-54.317,/models/qwen3-vl-8b-reasoning/providers,false\nQwen3 235B,-54.333,/models/qwen3-235b-a22b-instruct/providers,false\nGemini 2.5 Flash-Lite (Sep),-54.633,/models/gemini-2-5-flash-lite-preview-09-2025-reasoning/providers,false\nQwen3 4B 2507,-54.667,/models/qwen3-4b-2507-instruct-reasoning/providers,false\nNova 2.0 Lite (low),-54.95,/models/nova-2-0-lite-reasoning-low/providers,false\nLFM2 2.6B,-54.95,/models/lfm2-2-6b/providers,false\nLlama 3 70B,-54.95,/models/llama-3-instruct-70b/providers,false\nMi:dm K 2.5 Pro,-55.217,/models/mi-dm-k-2-5-pro-dec28/providers,false\nLlama 3.2 1B,-55.433,/models/llama-3-2-instruct-1b/providers,false\nLlama 3.3 70B,-55.467,/models/llama-3-3-instruct-70b/providers,false\nGPT-5 mini (minimal),-55.6,/models/gpt-5-mini-minimal/providers,false\nGrok 4 Fast,-55.683,/models/grok-4-fast/providers,false\nGPT-4.1 mini,-55.7,/models/gpt-4-1-mini/providers,false\nApriel-v1.5-15B-Thinker,-55.85,/models/apriel-v1-5-15b-thinker/providers,false\ngpt-oss-120B (low),-55.933,/models/gpt-oss-120b-low/providers,false\nMi:dm K 2.5 Pro Preview,-56.017,/models/midm-250-pro-rsnsft/providers,false\nSolar Open 100B,-56.133,/models/solar-open-100b-reasoning/providers,false\nPhi-4,-56.167,/models/phi-4/providers,false\nGLM-4.5V,-56.867,/models/glm-4-5v/providers,false\nLing-1T,-57.167,/models/ling-1t/providers,false\nK2-V2 (high),-57.283,/models/k2-v2/providers,false\nQwen3 30B A3B 2507,-57.433,/models/qwen3-30b-a3b-2507-reasoning/providers,false\nSolar Pro 2,-57.533,/models/solar-pro-2-reasoning/providers,false\nNova 2.0 Lite (medium),-57.633,/models/nova-2-0-lite-reasoning-medium/providers,false\nDevstral Small (May),-58.017,/models/devstral-small-2505/providers,false\nNVIDIA Nemotron Nano 9B V2,-58.383,/models/nvidia-nemotron-nano-9b-v2/providers,false\nK-EXAONE,-58.75,/models/k-exaone/providers,false\nDevstral Small 2,-58.883,/models/devstral-small-2/providers,false\nGPT-4.1 nano,-58.95,/models/gpt-4-1-nano/providers,false\nQwen3 VL 30B A3B,-59.133,/models/qwen3-vl-30b-a3b-reasoning/providers,false\nGemini 2.5 Flash-Lite,-59.45,/models/gemini-2-5-flash-lite/providers,false\nNova 2.0 Omni (medium),-59.7,/models/nova-2-0-omni-reasoning-medium/providers,false\nRing-flash-2.0,-59.767,/models/ring-flash-2-0/providers,false\nApriel-v1.6-15B-Thinker,-59.833,/models/apriel-v1-6-15b-thinker/providers,false\nOlmo 3 32B Think,-60.25,/models/olmo-3-32b-think/providers,false\nNova 2.0 
Lite,-60.483,/models/nova-2-0-lite/providers,false\nQwen3 Next 80B A3B,-60.483,/models/qwen3-next-80b-a3b-instruct/providers,false\ngpt-oss-20B (low),-60.6,/models/gpt-oss-20b-low/providers,false\nEXAONE 4.0 32B,-61.417,/models/exaone-4-0-32b-reasoning/providers,false\nQwen3 Omni 30B A3B,-61.767,/models/qwen3-omni-30b-a3b-reasoning/providers,false\nFalcon-H1R-7B,-61.917,/models/falcon-h1r-7b/providers,false\nGranite 4.0 H Small,-62.067,/models/granite-4-0-h-small/providers,false\nMotif-2-12.7B,-62.233,/models/motif-2-12-7b/providers,false\nPhi-4 Mini,-62.7,/models/phi-4-mini/providers,false\nJamba Reasoning 3B,-62.833,/models/jamba-reasoning-3b/providers,false\nLlama 3.2 11B (Vision),-62.967,/models/llama-3-2-instruct-11b-vision/providers,false\nSolar Pro 2,-63.117,/models/solar-pro-2/providers,false\nGLM-4.5-Air,-63.15,/models/glm-4-5-air/providers,false\nGranite 4.0 350M,-63.683,/models/granite-4-0-350m/providers,false\nQwen3 VL 32B,-63.9,/models/qwen3-vl-32b-instruct/providers,false\nMinistral 3 3B,-63.967,/models/ministral-3-3b/providers,false\nQwen3 VL 30B A3B,-64.033,/models/qwen3-vl-30b-a3b-instruct/providers,false\nEXAONE 4.0 32B,-64.3,/models/exaone-4-0-32b/providers,false\ngpt-oss-20B (high),-64.9,/models/gpt-oss-20b/providers,false\nNVIDIA Nemotron 3 Nano,-65.2,/models/nvidia-nemotron-3-nano-30b-a3b/providers,false\nNova 2.0 Omni,-65.233,/models/nova-2-0-omni/providers,false\nReka Flash 3,-65.233,/models/reka-flash-3/providers,false\nDeepSeek R1 0528 Qwen3 8B,-65.317,/models/deepseek-r1-qwen3-8b/providers,false\nK-EXAONE,-65.95,/models/k-exaone-non-reasoning/providers,false\nQwen3 8B,-66.117,/models/qwen3-8b-instruct-reasoning/providers,false\nMistral 7B,-66.25,/models/mistral-7b-instruct/providers,false\nNVIDIA Nemotron Nano 12B v2 VL,-66.35,/models/nvidia-nemotron-nano-12b-v2-vl-reasoning/providers,false\nGPT-5 nano (minimal),-66.367,/models/gpt-5-nano-minimal/providers,false\nMagistral Small 1.2,-66.383,/models/magistral-small-2509/providers,false\nQwen3 30B A3B 2507,-66.8,/models/qwen3-30b-a3b-2507/providers,false\nMinistral 3 14B,-67.383,/models/ministral-3-14b/providers,false\nLing-flash-2.0,-67.45,/models/ling-flash-2-0/providers,false\nGemma 3 27B,-67.95,/models/gemma-3-27b/providers,false\nQwen3 30B,-67.983,/models/qwen3-30b-a3b-instruct/providers,false\nQwen3 14B,-68.3,/models/qwen3-14b-instruct/providers,false\nMolmo2-8B,-69.433,/models/molmo2-8b/providers,false\nQwen3 Omni 30B A3B,-69.75,/models/qwen3-omni-30b-a3b-instruct/providers,false\nMinistral 3 8B,-69.983,/models/ministral-3-8b/providers,false\nLFM2 1.2B,-71.217,/models/lfm2-1-2b/providers,false\nLlama 3 8B,-71.65,/models/llama-3-instruct-8b/providers,false\nNVIDIA Nemotron Nano 12B v2 VL,-73.167,/models/nvidia-nemotron-nano-12b-v2-vl/providers,false\nOlmo 3 7B Think,-73.967,/models/olmo-3-7b-think/providers,false\nGranite 4.0 H 1B,-74.383,/models/granite-4-0-h-nano-1b/providers,false\nLFM2.5-1.2B-Instruct,-74.75,/models/lfm2-5-1-2b-instruct/providers,false\nQwen3 8B,-75.4,/models/qwen3-8b-instruct/providers,false\nGemma 3 12B,-77.25,/models/gemma-3-12b/providers,false\nLFM2 8B A1B,-77.517,/models/lfm2-8b-a1b/providers,false\nOlmo 3 7B,-78.183,/models/olmo-3-7b-instruct/providers,false\nGranite 4.0 Micro,-78.35,/models/granite-4-0-micro/providers,false\nQwen3 1.7B,-78.35,/models/qwen3-1.7b-instruct-reasoning/providers,false\nGranite 3.3 8B,-78.95,/models/granite-3-3-8b-instruct/providers,false\nGemma 3 1B,-80.25,/models/gemma-3-1b/providers,false\nGemma 3n 
E2B,-80.617,/models/gemma-3n-e2b/providers,false\nGemma 3n E4B,-81.983,/models/gemma-3n-e4b/providers,false\nQwen3 1.7B,-82.367,/models/qwen3-1.7b-instruct/providers,false\nQwen3 0.6B,-82.45,/models/qwen3-0.6b-instruct-reasoning/providers,false\nExaone 4.0 1.2B,-82.467,/models/exaone-4-0-1-2b-reasoning/providers,false\nGranite 4.0 1B,-83,/models/granite-4-0-nano-1b/providers,false\nExaone 4.0 1.2B,-83.167,/models/exaone-4-0-1-2b/providers,false\nGemma 3 4B,-83.817,/models/gemma-3-4b/providers,false\nLFM2.5-VL-1.6B,-85.6,/models/lfm2-5-vl-1-6b/providers,false\nQwen3 0.6B,-86.85,/models/qwen3-0.6b-instruct/providers,false\nGranite 4.0 H 350M,-89.467,/models/granite-4-0-h-350m/providers,false"}

Artificial Analysis Openness Index: Results

Openness Index assesses model openness on a 0 to 100 normalized scale (higher is more open)

Intelligence Index Comparisons

Intelligence vs. Price

Artificial Analysis Intelligence Index; Price: USD per 1M Tokens
Most attractive quadrant
Alibaba
Amazon
Anthropic
DeepSeek
Google
Kimi
KwaiKAT
Meta
MiniMax
Mistral
NVIDIA
OpenAI
xAI
Xiaomi
Z AI

While higher intelligence models are typically more expensive, they do not all follow the same price-quality curve.

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
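As a worked example of the 3:1 blend, a minimal sketch assuming the blend is a weighted average of three parts input price to one part output price (the prices below are placeholders, not any provider's actual pricing):

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """Blend input and output prices (USD per 1M tokens) at a 3:1 ratio,
    i.e. a weighted average of 3 parts input price to 1 part output price."""
    return (3 * input_usd_per_m + 1 * output_usd_per_m) / 4


# Hypothetical model priced at $1.00/M input and $4.00/M output tokens:
print(blended_price(1.00, 4.00))  # 1.75 USD per 1M blended tokens
```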

Intelligence Index Token Use & Cost

Output Tokens Used to Run Artificial Analysis Intelligence Index

Tokens used to run all evaluations in the Artificial Analysis Intelligence Index
Answer Tokens
Reasoning Tokens

The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).

Cost to Run Artificial Analysis Intelligence Index

Cost (USD) to run all evaluations in the Artificial Analysis Intelligence Index
Input Cost
Output Cost
Reasoning Cost

The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
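A minimal sketch of that calculation, assuming cost is simply token usage multiplied by per-million-token prices and that reasoning tokens are billed as output tokens (all figures below are placeholders):

```python
def eval_cost_usd(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost (USD) to run the evaluations, given total token usage and the
    model's prices per 1M input/output tokens. Reasoning tokens are treated
    as output tokens here, which is an assumption."""
    return ((input_tokens / 1_000_000) * input_price_per_m
            + (output_tokens / 1_000_000) * output_price_per_m)


# Hypothetical run: 5M input tokens, 20M output (incl. reasoning) tokens,
# priced at $1.00/M input and $4.00/M output.
print(eval_cost_usd(5_000_000, 20_000_000, 1.00, 4.00))  # 85.0
```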

Context Window

Context Window

Context Window: Tokens Limit; Higher is better

Larger context windows are relevant to RAG (Retrieval Augmented Generation) LLM workflows, which typically involve reasoning over and retrieving information from large amounts of data.

Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
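Because the limit covers input and output combined, a request only fits if prompt tokens plus the requested output budget stay within the window. A minimal sketch of that check (token counts below are placeholders):

```python
def fits_context(prompt_tokens: int, max_output_tokens: int,
                 context_window: int) -> bool:
    """Return True if prompt plus requested output fits in the combined
    input+output context window. Note that many models also impose a
    separate, lower cap on output tokens alone."""
    return prompt_tokens + max_output_tokens <= context_window


# Hypothetical: a 120k-token document plus a 10k-token answer fits in a
# 200k context window, but not in a 128k one.
print(fits_context(120_000, 10_000, 200_000))  # True
print(fits_context(120_000, 10_000, 128_000))  # False
```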

{"@context":"https://schema.org","@type":"Dataset","name":"Context Window","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Context Window: Tokens Limit; Higher is better","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}

Pricing: Input and Output Prices

Price: USD per 1M Tokens
Input price
Output price

Price per token included in the request/message sent to the API, represented as USD per million Tokens.

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Intelligence vs. Price (Log Scale)

Artificial Analysis Intelligence Index; Price: USD per 1M Tokens; Inspired by prior analysis by Swyx
Most attractive quadrant
Alibaba
Amazon
Anthropic
DeepSeek
Google
Kimi
KwaiKAT
Meta
MiniMax
Mistral
NVIDIA
OpenAI
xAI
Xiaomi
Z AI

While higher intelligence models are typically more expensive, they do not all follow the same price-quality curve.

Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).

Speed

Measured by Output Speed (tokens per second)

Output Speed

Output Tokens per Second; Higher is better

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
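A minimal sketch of how output speed can be derived from a streamed response, assuming you have recorded when the first and last chunks arrived and how many output tokens were generated (the timings below are placeholders):

```python
def output_speed(output_tokens: int, t_first_chunk: float,
                 t_last_chunk: float) -> float:
    """Output tokens per second while the model is generating,
    measured from the first received chunk to the last."""
    generation_seconds = t_last_chunk - t_first_chunk
    if generation_seconds <= 0:
        raise ValueError("last chunk must arrive after the first chunk")
    return output_tokens / generation_seconds


# Hypothetical streamed request: first chunk 0.6s after the request was sent,
# last chunk at 5.6s, 500 output tokens generated.
print(output_speed(500, t_first_chunk=0.6, t_last_chunk=5.6))  # 100.0 tokens/s
```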

{"@context":"https://schema.org","@type":"Dataset","name":"Output Speed","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Output Tokens per Second; Higher is better","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}

Output Speed vs. Price

Output Speed: Output Tokens per Second; Price: USD per 1M Tokens
Most attractive quadrant
Alibaba
Amazon
Anthropic
DeepSeek
Google
Kimi
KwaiKAT
Meta
MiniMax
Mistral
NVIDIA
OpenAI
xAI
Xiaomi
Z AI

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).

Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).

Latency

Measured by Time (seconds) to First Token

Latency: Time To First Answer Token

Seconds to First Answer Token Received; Accounts for Reasoning Model 'Thinking' time
Input processing
Thinking (reasoning models, when applicable)

Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.

End-to-End Response Time

Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed

End-to-End Response Time

Seconds to Output 500 Tokens, including reasoning model 'thinking' time; Lower is better
'Thinking' time (reasoning models)
Input processing time
Outputting time

Seconds to receive a 500 token response. Key components:

  • Input time: Time to receive the first response token
  • Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. The token count is based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
  • Answer time: Time to generate 500 output tokens, based on output speed

Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
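Combining the components above, a minimal sketch of the end-to-end estimate, assuming reasoning tokens and answer tokens are generated at the same measured output speed (all figures below are placeholders):

```python
def end_to_end_seconds(ttft_seconds: float, reasoning_tokens: int,
                       output_speed_tps: float, answer_tokens: int = 500) -> float:
    """Estimated seconds to receive a full answer: input-processing time
    (time to first token), plus time spent emitting reasoning tokens,
    plus time to generate the answer tokens, all at the measured output speed."""
    thinking_seconds = reasoning_tokens / output_speed_tps
    answer_seconds = answer_tokens / output_speed_tps
    return ttft_seconds + thinking_seconds + answer_seconds


# Hypothetical reasoning model: 0.5s to first token, 2,000 reasoning tokens,
# 100 tokens/s output speed, 500-token answer.
print(end_to_end_seconds(0.5, 2_000, 100))  # 25.5 seconds
```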

Model Size (Open Weights Models Only)

Model Size: Total and Active Parameters

Comparison between total model parameters and parameters active during inference
Active Parameters
Passive Parameters

The total number of trainable weights and biases in the model, expressed in billions. These parameters are learned during training and determine the model's ability to process and generate responses.

The number of parameters actually executed during each inference forward pass, expressed in billions. For Mixture of Experts (MoE) models, a routing mechanism selects a subset of experts per token, resulting in fewer active than total parameters. Dense models use all parameters, so active equals total.
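As a worked illustration of the gap between total and active parameters, a minimal sketch under an assumed MoE layout; the expert counts and sizes below are invented for the example and do not describe any model on this page:

```python
def moe_param_counts(shared_params_b: float, num_experts: int,
                     params_per_expert_b: float, experts_per_token: int):
    """Return (total, active) parameters in billions for a simple MoE layout:
    shared (always-on) parameters plus a pool of experts, of which the router
    activates only `experts_per_token` for each token."""
    total = shared_params_b + num_experts * params_per_expert_b
    active = shared_params_b + experts_per_token * params_per_expert_b
    return total, active


# Hypothetical model: 10B shared parameters, 64 experts of 2B each,
# router picks 2 experts per token.
total_b, active_b = moe_param_counts(10, 64, 2.0, 2)
print(total_b, active_b)  # 138.0 total vs 14.0 active (billions)
```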

Further details
Model Name / Further analysis
OpenAI logoOpenAI
OpenAI logogpt-oss-20B (high)
OpenAI logoo3
OpenAI logogpt-oss-120B (high)
OpenAI logoGPT-5.2 (medium)
OpenAI logoGPT-5.2 (xhigh)
OpenAI logogpt-oss-20B (low)
OpenAI logoGPT-5.2 (Non-reasoning)
OpenAI logogpt-oss-120B (low)
OpenAI logoGPT-5 mini (high)
OpenAI logoGPT-5 nano (high)
OpenAI logoGPT-5.1 Codex mini (high)
OpenAI logoGPT-5.2 Codex (xhigh)
OpenAI logoGPT-5.1 Codex (high)
OpenAI logoo1
OpenAI logoo1-preview
OpenAI logoo1-mini
OpenAI logoGPT-4o (Aug '24)
OpenAI logoGPT-4o (May '24)
OpenAI logoGPT-4 Turbo
OpenAI logoGPT-4o (Nov '24)
OpenAI logoGPT-4o mini
OpenAI logoGPT-3.5 Turbo
OpenAI logoGPT-4.1
OpenAI logoGPT-5 (minimal)
OpenAI logoGPT-4o Realtime (Dec '24)
OpenAI logoGPT-5.1 (high)
OpenAI logoGPT-4.1 mini
OpenAI logoo4-mini (high)
OpenAI logoGPT-5 nano (minimal)
OpenAI logoGPT-5 (high)
OpenAI logoGPT-5 (low)
OpenAI logoGPT-5 Codex (high)
OpenAI logoGPT-4.1 nano
OpenAI logoGPT-4o mini Realtime (Dec '24)
OpenAI logoGPT-5 (medium)
OpenAI logoGPT-5 mini (minimal)
OpenAI logoGPT-3.5 Turbo (0613)
OpenAI logoGPT-4o (March 2025, chatgpt-4o-latest)
OpenAI logoo1-pro
OpenAI logoGPT-5 (ChatGPT)
OpenAI logoGPT-4
OpenAI logoGPT-4.5 (Preview)
OpenAI logoGPT-5 nano (medium)
OpenAI logoGPT-5.1 (Non-reasoning)
OpenAI logoGPT-4o (ChatGPT)
OpenAI logoo3-mini (high)
OpenAI logoo3-mini
OpenAI logoo3-pro
OpenAI logoGPT-5 mini (medium)
xAI logoxAI
xAI logoGrok-1
xAI logoGrok Voice Agent
xAI logoGrok 4.1 Fast (Reasoning)
xAI logoGrok 4
xAI logoGrok Code Fast 1
xAI logoGrok 3 mini Reasoning (high)
xAI logoGrok 4.1 Fast (Non-reasoning)
xAI logoGrok Beta
xAI logoGrok 2 (Dec '24)
xAI logoGrok 3
xAI logoGrok 4 Fast (Reasoning)
xAI logoGrok 4 Fast (Non-reasoning)
xAI logoGrok 3 Reasoning Beta
Meta logoMeta
Meta logoLlama 3.3 Instruct 70B
Meta logoLlama 3.1 Instruct 405B
Meta logoLlama 3.2 Instruct 90B (Vision)
Meta logoLlama 3.2 Instruct 11B (Vision)
Meta logoLlama 4 Scout
Meta logoLlama 4 Maverick
Meta logoLlama 65B
Meta logoLlama 3.1 Instruct 70B
Meta logoLlama 3.1 Instruct 8B
Meta logoLlama 3.2 Instruct 3B
Meta logoLlama 3 Instruct 70B
Meta logoLlama 3 Instruct 8B
Meta logoLlama 3.2 Instruct 1B
Meta logoLlama 2 Chat 13B
Meta logoLlama 2 Chat 70B
Meta logoLlama 2 Chat 7B
Google logoGoogle
Google logoGemini 2.5 Flash-Lite Preview (Sep '25) (Reasoning)
Google logoGemini 3 Flash Preview (Reasoning)
Google logoGemma 3 27B Instruct
Google logoGemma 3 12B Instruct
Google logoGemma 3n E4B Instruct
Google logoGemma 3 1B Instruct
Google logoGemma 3 4B Instruct
Google logoGemini 2.5 Pro
Google logoGemini 3 Pro Preview (high)
Google logoGemini 3 Flash Preview (Non-reasoning)
Google logoGemini 3 Pro Preview (low)
Google logoGemma 3n E2B Instruct
Google logoGemma 3 270M
Google logoGemini 2.5 Flash-Lite Preview (Sep '25) (Non-reasoning)
Google logoGemini 2.0 Pro Experimental (Feb '25)
Google logoGemini 2.0 Flash (experimental)
Google logoGemini 1.5 Pro (Sep '24)
Google logoGemini 2.0 Flash-Lite (Preview)
Google logoGemini 2.0 Flash (Feb '25)
Google logoGemini 1.5 Flash (Sep '24)
Google logoGemini 1.5 Flash-8B
Google logoGemini 2.5 Flash-Lite (Non-reasoning)
Google logoGemini 2.5 Flash Preview (Sep '25) (Non-reasoning)
Google logoGemini 2.5 Flash Preview (Non-reasoning)
Google logoGemini 2.5 Flash Preview (Reasoning)
Google logoGemma 3n E4B Instruct Preview (May '25)
Google logoGemini 1.5 Flash (May '24)
Google logoGemini 2.5 Pro Preview (Mar' 25)
Google logoGemini 2.5 Flash (Reasoning)
Google logoGemini 2.5 Flash Preview (Sep '25) (Reasoning)
Google logoGemini 2.5 Flash (Non-reasoning)
Google logoGemini 1.5 Pro (May '24)
Google logoGemini 2.0 Flash Thinking Experimental (Jan '25)
Google logoGemini 2.0 Flash Thinking Experimental (Dec '24)
Google logoGemini 1.0 Ultra
Google logoGemini 2.5 Flash-Lite (Reasoning)
Google logoGemini 1.0 Pro
Google logoGemini 2.5 Pro Preview (May' 25)
Google logoPALM-2
Google logoGemini 2.0 Flash-Lite (Feb '25)
Anthropic logoAnthropic
Anthropic logoClaude 4.5 Haiku (Reasoning)
Anthropic logoClaude Opus 4.5 (Non-reasoning)
Anthropic logoClaude 4.5 Sonnet (Non-reasoning)
Anthropic logoClaude 4.5 Sonnet (Reasoning)
Anthropic logoClaude Opus 4.5 (Reasoning)
Anthropic logoClaude 4.5 Haiku (Non-reasoning)
Anthropic logoClaude 3.5 Sonnet (Oct '24)
Anthropic logoClaude 3.5 Sonnet (June '24)
Anthropic logoClaude 3 Opus
Anthropic logoClaude 3.5 Haiku
Anthropic logoClaude 3 Sonnet
Anthropic logoClaude 3 Haiku
Anthropic logoClaude Instant
Anthropic logoClaude 2.0
Anthropic logoClaude 4 Sonnet (Non-reasoning)
Anthropic logoClaude 4.1 Opus (Non-reasoning)
Anthropic logoClaude 4 Opus (Reasoning)
Anthropic logoClaude 4.1 Opus (Reasoning)
Anthropic logoClaude 3.7 Sonnet (Non-reasoning)
Anthropic logoClaude 4 Opus (Non-reasoning)
Anthropic logoClaude 4 Sonnet (Reasoning)
Anthropic logoClaude 3.7 Sonnet (Reasoning)
Anthropic logoClaude 2.1
Mistral logoMistral
Mistral logoMistral Small 3.2
Mistral logoMistral Medium 3.1
Mistral logoMinistral 3 14B
Mistral logoMinistral 3 8B
Mistral logoMinistral 3 3B
Mistral logoMistral Large 3
Mistral logoMagistral Small 1.2
Mistral logoDevstral 2
Mistral logoDevstral Small 2
Mistral logoMagistral Medium 1.2
Mistral logoMistral Large 2 (Nov '24)
Mistral logoMistral Large 2 (Jul '24)
Mistral logoPixtral Large
Mistral logoMistral Small 3
Mistral logoMistral Small (Sep '24)
Mistral logoMixtral 8x22B Instruct
Mistral logoMistral Small (Feb '24)
Mistral logoMistral Large (Feb '24)
Mistral logoMixtral 8x7B Instruct
Mistral logoMistral 7B Instruct
Mistral logoDevstral Medium
Mistral logoMistral Saba
Mistral logoMistral Small 3.1
Mistral logoDevstral Small (Jul '25)
Mistral logoMagistral Medium 1
Mistral logoMagistral Small 1
Mistral logoMistral Medium
Mistral logoMistral Medium 3
Mistral logoDevstral Small (May '25)
DeepSeek logoDeepSeek
DeepSeek logoDeepSeek R1 Distill Llama 70B
DeepSeek logoDeepSeek R1 0528 (May '25)
DeepSeek logoDeepSeek V3.2 (Reasoning)
DeepSeek logoDeepSeek V3.2 (Non-reasoning)
DeepSeek logoDeepSeek V3.2 Speciale
DeepSeek logoDeepSeek-OCR
DeepSeek logoDeepSeek R1 0528 Qwen3 8B
DeepSeek logoDeepSeek R1 Distill Qwen 32B
DeepSeek logoDeepSeek V3 (Dec '24)
DeepSeek logoDeepSeek R1 Distill Qwen 14B
DeepSeek logoDeepSeek-V2.5 (Dec '24)
DeepSeek logoDeepSeek-Coder-V2
DeepSeek logoDeepSeek R1 Distill Llama 8B
DeepSeek logoDeepSeek LLM 67B Chat (V1)
DeepSeek logoDeepSeek R1 Distill Qwen 1.5B
DeepSeek logoDeepSeek R1 (Jan '25)
DeepSeek logoDeepSeek V3 0324
DeepSeek logoDeepSeek V3.1 (Non-reasoning)
DeepSeek logoDeepSeek V3.2 Exp (Non-reasoning)
DeepSeek logoDeepSeek V3.1 Terminus (Non-reasoning)
DeepSeek logoDeepSeek V3.1 Terminus (Reasoning)
DeepSeek logoDeepSeek V3.1 (Reasoning)
DeepSeek logoDeepSeek-V2-Chat
DeepSeek logoDeepSeek-V2.5
DeepSeek logoDeepSeek V3.2 Exp (Reasoning)
DeepSeek logoDeepSeek Coder V2 Lite Instruct
Perplexity logoPerplexity
Perplexity logoR1 1776
Perplexity logoSonar Reasoning Pro
Perplexity logoSonar Pro
Perplexity logoSonar Reasoning
Perplexity logoSonar
TII UAE logoTII UAE
TII UAE logoFalcon-H1R-7B
Amazon logoAmazon
Amazon logoNova Micro
Amazon logoNova 2.0 Omni (medium)
Amazon logoNova Premier
Amazon logoNova 2.0 Omni (low)
Amazon logoNova 2.0 Pro Preview (Non-reasoning)
Amazon logoNova 2.0 Lite (low)
Amazon logoNova 2.0 Pro Preview (medium)
Amazon logoNova 2.0 Omni (Non-reasoning)
Amazon logoNova 2.0 Pro Preview (low)
Amazon logoNova 2.0 Lite (medium)
Amazon logoNova 2.0 Lite (Non-reasoning)
Amazon logoNova Pro
Amazon logoNova Lite
Microsoft Azure logoMicrosoft Azure
Microsoft Azure logoPhi-4
Microsoft Azure logoPhi-4 Mini Instruct
Microsoft Azure logoPhi-4 Multimodal Instruct
Microsoft Azure logoPhi-3 Mini Instruct 3.8B
Liquid AI logoLiquid AI
Liquid AI logoLFM2.5-VL-1.6B
Liquid AI logoLFM2.5-1.2B-Instruct
Liquid AI logoLFM2 2.6B
Liquid AI logoLFM2 1.2B
Liquid AI logoLFM2 8B A1B
Liquid AI logoLFM 40B
Upstage logoUpstage
Upstage logoSolar Pro 2 (Reasoning)
Upstage logoSolar Open 100B (Reasoning)
Upstage logoSolar Pro 2 (Non-reasoning)
Upstage logoSolar Mini
Upstage logoSolar Pro 2 (Preview) (Non-reasoning)
Upstage logoSolar Pro 2 (Preview) (Reasoning)
MiniMax logoMiniMax
MiniMax logoMiniMax-M2.1
MiniMax logoMiniMax M1 80k
MiniMax logoMiniMax M1 40k
MiniMax logoMiniMax-M2
NVIDIA logoNVIDIA
NVIDIA logoLlama 3.1 Nemotron Instruct 70B
NVIDIA logoNVIDIA Nemotron Nano 12B v2 VL (Non-reasoning)
NVIDIA logoNVIDIA Nemotron Nano 9B V2 (Non-reasoning)
NVIDIA logoNVIDIA Nemotron 3 Nano 30B A3B (Reasoning)
NVIDIA logoNVIDIA Nemotron Nano 9B V2 (Reasoning)
NVIDIA logoLlama 3.1 Nemotron Ultra 253B v1 (Reasoning)
NVIDIA logoLlama Nemotron Super 49B v1.5 (Non-reasoning)
NVIDIA logoNVIDIA Nemotron Nano 12B v2 VL (Reasoning)
NVIDIA logoNVIDIA Nemotron 3 Nano 30B A3B (Non-reasoning)
NVIDIA logoLlama 3.3 Nemotron Super 49B v1 (Non-reasoning)
NVIDIA logoLlama 3.1 Nemotron Nano 4B v1.1 (Reasoning)
NVIDIA logoLlama Nemotron Super 49B v1.5 (Reasoning)
NVIDIA logoLlama 3.3 Nemotron Super 49B v1 (Reasoning)
Kimi logoKimi
Kimi logoKimi K2 Thinking
Kimi logoKimi K2 0905
Kimi logoKimi Linear 48B A3B Instruct
Kimi logoKimi K2
Allen Institute for AI logoAllen Institute for AI
Allen Institute for AI logoMolmo2-8B
Allen Institute for AI logoOlmo 3.1 32B Think
Allen Institute for AI logoOlmo 3 7B Think
Allen Institute for AI logoOlmo 3 7B Instruct
Allen Institute for AI logoOlmo 3.1 32B Instruct
Allen Institute for AI logoMolmo 7B-D
Allen Institute for AI logoLlama 3.1 Tulu3 405B
Allen Institute for AI logoOlmo 3 32B Think
Allen Institute for AI logoOLMo 2 7B
Allen Institute for AI logoOLMo 2 32B
IBM logoIBM
IBM logoGranite 4.0 H 1B
IBM logoGranite 4.0 H 350M
IBM logoGranite 4.0 H Small
IBM logoGranite 4.0 350M
IBM logoGranite 4.0 1B
IBM logoGranite 4.0 Micro
IBM logoGranite 3.3 8B (Non-reasoning)
Reka AI logoReka AI
Reka AI logoReka Flash 3
Reka AI logoReka Flash (Sep '24)
Nous Research logoNous Research
Nous Research logoDeepHermes 3 - Llama-3.1 8B Preview (Non-reasoning)
Nous Research logoDeepHermes 3 - Mistral 24B Preview (Non-reasoning)
Nous Research logoHermes 4 - Llama-3.1 405B (Non-reasoning)
Nous Research logoHermes 4 - Llama-3.1 405B (Reasoning)
Nous Research logoHermes 4 - Llama-3.1 70B (Reasoning)
Nous Research logoHermes 4 - Llama-3.1 70B (Non-reasoning)
Nous Research logoHermes 3 - Llama-3.1 70B
LG AI Research logoLG AI Research
LG AI Research logoEXAONE 4.0 32B (Reasoning)
LG AI Research logoEXAONE 4.0 32B (Non-reasoning)
LG AI Research logoExaone 4.0 1.2B (Non-reasoning)
LG AI Research logoK-EXAONE (Non-reasoning)
LG AI Research logoExaone 4.0 1.2B (Reasoning)
LG AI Research logoK-EXAONE (Reasoning)
Xiaomi logoXiaomi
Xiaomi logoMiMo-V2-Flash (Reasoning)
Xiaomi logoMiMo-V2-Flash (Non-reasoning)
Baidu logoBaidu
Baidu logoERNIE 4.5 300B A47B
Baidu logoERNIE 5.0 Thinking Preview
Deep Cogito logoDeep Cogito
Deep Cogito logoCogito v2.1 (Reasoning)
KwaiKAT logoKwaiKAT
KwaiKAT logoKAT-Coder-Pro V1
Prime Intellect logoPrime Intellect
Prime Intellect logoINTELLECT-3
Motif Technologies logoMotif Technologies
Motif Technologies logoMotif-2-12.7B-Reasoning
MBZUAI Institute of Foundation Models logoMBZUAI Institute of Foundation Models
MBZUAI Institute of Foundation Models logoK2-V2 (high)
MBZUAI Institute of Foundation Models logoK2-V2 (medium)
MBZUAI Institute of Foundation Models logoK2-V2 (low)
Korea Telecom logoKorea Telecom
Korea Telecom logoMi:dm K 2.5 Pro
Korea Telecom logoMi:dm K 2.5 Pro Preview
Naver logoNaver
Naver logoHyperCLOVA X SEED Think (32B)
Z AI logoZ AI
Z AI logoGLM-4.6V (Non-reasoning)
Z AI logoGLM-4.6V (Reasoning)
Z AI logoGLM-4.7 (Non-reasoning)
Z AI logoGLM-4.5-Air
Z AI logoGLM-4.7 (Reasoning)
Z AI logoGLM-4.5V (Non-reasoning)
Z AI logoGLM-4.6 (Reasoning)
Z AI logoGLM-4.5V (Reasoning)
Z AI logoGLM-4.6 (Non-reasoning)
Z AI logoGLM-4.5 (Reasoning)
Cohere logoCohere
Cohere logoCommand A
Cohere logoCommand-R+ (Apr '24)
Cohere logoCommand-R (Mar '24)
ServiceNow logoServiceNow
ServiceNow logoApriel-v1.6-15B-Thinker
ServiceNow logoApriel-v1.5-15B-Thinker
AI21 Labs logoAI21 Labs
AI21 Labs logoJamba Reasoning 3B
AI21 Labs logoJamba 1.7 Mini
AI21 Labs logoJamba 1.7 Large
AI21 Labs logoJamba 1.5 Large
AI21 Labs logoJamba 1.5 Mini
AI21 Labs logoJamba 1.6 Mini
AI21 Labs logoJamba 1.6 Large
Alibaba logoAlibaba
Alibaba logoQwen3 VL 32B (Reasoning)
Alibaba logoQwen3 VL 32B Instruct
Alibaba logoQwen3 235B A22B 2507 Instruct
Alibaba logoQwen3 Coder 480B A35B Instruct
Alibaba logoQwen3 Next 80B A3B Instruct
Alibaba logoQwen3 Next 80B A3B (Reasoning)
Alibaba logoQwen3 235B A22B 2507 (Reasoning)
Alibaba logoQwen3 4B 2507 (Reasoning)
Alibaba logoQwen3 VL 30B A3B (Reasoning)
Alibaba logoQwen3 VL 235B A22B Instruct
Alibaba logoQwen3 Omni 30B A3B (Reasoning)
Alibaba logoQwen3 Omni 30B A3B Instruct
Alibaba logoQwen3 0.6B (Non-reasoning)
Alibaba logoQwen3 1.7B (Reasoning)
Alibaba logoQwen Chat 14B
Alibaba logoQwen3 VL 30B A3B Instruct
Alibaba logoQwen3 1.7B (Non-reasoning)
Alibaba logoQwen3 30B A3B 2507 (Reasoning)
Alibaba logoQwen3 30B A3B 2507 Instruct
Alibaba logoQwen3 VL 4B Instruct
Alibaba logoQwen3 VL 235B A22B (Reasoning)
Alibaba logoQwen3 VL 8B (Reasoning)
Alibaba logoQwen3 Max Thinking
Alibaba logoQwen3 0.6B (Reasoning)
Alibaba logoQwen3 VL 4B (Reasoning)
Alibaba logoQwen3 Coder 30B A3B Instruct
Alibaba logoQwen3 4B 2507 Instruct
Alibaba logoQwen3 Max
Alibaba logoQwen3 VL 8B Instruct
Alibaba logoQwen2.5 Max
Alibaba logoQwen2.5 Instruct 72B
Alibaba logoQwen2.5 Coder Instruct 32B
Alibaba logoQwen2.5 Turbo
Alibaba logoQwen2 Instruct 72B
Alibaba logoQwen3 32B (Non-reasoning)
Alibaba logoQwQ 32B-Preview
Alibaba logoQwen Chat 72B
Alibaba logoQwen3 32B (Reasoning)
Alibaba logoQwen3 235B A22B (Non-reasoning)
Alibaba logoQwen3 235B A22B (Reasoning)
Alibaba logoQwen1.5 Chat 110B
Alibaba logoQwen3 4B (Reasoning)
Alibaba logoQwen2.5 Instruct 32B
Alibaba logoQwen3 30B A3B (Non-reasoning)
Alibaba logoQwen3 4B (Non-reasoning)
Alibaba logoQwen3 14B (Reasoning)
Alibaba logoQwQ 32B
Alibaba logoQwen3 30B A3B (Reasoning)
Alibaba logoQwen3 8B (Reasoning)
Alibaba logoQwen3 8B (Non-reasoning)
Alibaba logoQwen3 Max (Preview)
Alibaba logoQwen3 14B (Non-reasoning)
Alibaba logoQwen2.5 Coder Instruct 7B
InclusionAI logoInclusionAI
InclusionAI logoLing-mini-2.0
InclusionAI logoLing-flash-2.0
InclusionAI logoRing-flash-2.0
InclusionAI logoLing-1T
InclusionAI logoRing-1T
ByteDance Seed logoByteDance Seed
ByteDance Seed logoDoubao-Seed-1.8
ByteDance Seed logoDoubao Seed Code
ByteDance Seed logoSeed-OSS-36B-Instruct
OpenChat logoOpenChat
OpenChat logoOpenChat 3.5 (1210)
Databricks logoDatabricks
Databricks logoDBRX Instruct
Snowflake logoSnowflake
Snowflake logoArctic Instruct

Models compared: OpenAI: GPT 4o Audio, GPT 4o Realtime, GPT 4o Speech Pipeline, GPT Realtime, GPT Realtime Mini (Oct '25), GPT-3.5 Turbo, GPT-3.5 Turbo (0125), GPT-3.5 Turbo (0301), GPT-3.5 Turbo (0613), GPT-3.5 Turbo (1106), GPT-3.5 Turbo Instruct, GPT-4, GPT-4 Turbo, GPT-4 Turbo (0125), GPT-4 Turbo (1106), GPT-4 Vision, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano, GPT-4.5 (Preview), GPT-4o (Apr), GPT-4o (Aug), GPT-4o (ChatGPT), GPT-4o (Mar), GPT-4o (May), GPT-4o (Nov), GPT-4o Realtime (Dec), GPT-4o mini, GPT-4o mini Realtime (Dec), GPT-5 (ChatGPT), GPT-5 (high), GPT-5 (low), GPT-5 (medium), GPT-5 (minimal), GPT-5 Codex (high), GPT-5 Pro (high), GPT-5 mini (high), GPT-5 mini (medium), GPT-5 mini (minimal), GPT-5 nano (high), GPT-5 nano (medium), GPT-5 nano (minimal), GPT-5.1, GPT-5.1 (high), GPT-5.1 Codex (high), GPT-5.1 Codex mini (high), GPT-5.2, GPT-5.2 (high), GPT-5.2 (medium), GPT-5.2 (xhigh), GPT-5.2 Codex (xhigh), gpt-oss-120B (high), gpt-oss-120B (low), gpt-oss-20B (high), gpt-oss-20B (low), o1, o1-mini, o1-preview, o1-pro, o3, o3-mini, o3-mini (high), o3-pro, and o4-mini (high), Meta: Code Llama 70B, Llama 2 Chat 13B, Llama 2 Chat 70B, Llama 2 Chat 7B, Llama 3 70B, Llama 3 8B, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, Llama 3.2 11B (Vision), Llama 3.2 1B, Llama 3.2 3B, Llama 3.2 90B (Vision), Llama 3.3 70B, Llama 4 Behemoth, Llama 4 Maverick, Llama 4 Scout, and Llama 65B, Google: Gemini 1.0 Pro, Gemini 1.0 Ultra, Gemini 1.5 Flash (May), Gemini 1.5 Flash (Sep), Gemini 1.5 Flash-8B, Gemini 1.5 Pro (May), Gemini 1.5 Pro (Sep), Gemini 2.0 Flash, Gemini 2.0 Flash (exp), Gemini 2.0 Flash Thinking exp. (Dec), Gemini 2.0 Flash Thinking exp. (Jan), Gemini 2.0 Flash-Lite (Feb), Gemini 2.0 Flash-Lite (Preview), Gemini 2.0 Pro Experimental, Gemini 2.5 Flash, Gemini 2.5 Flash Live Preview, Gemini 2.5 Flash Native Audio, Gemini 2.5 Flash Native Audio Dialog, Gemini 2.5 Flash (Sep), Gemini 2.5 Flash-Lite, Gemini 2.5 Flash-Lite (Sep), Gemini 2.5 Pro, Gemini 2.5 Pro (Mar), Gemini 2.5 Pro (May), Gemini 3 Flash, Gemini 3 Pro Preview (high), Gemini 3 Pro Preview (low), Gemini Experimental (Nov), Gemma 2 27B, Gemma 2 2B, Gemma 2 9B, Gemma 3 12B, Gemma 3 1B, Gemma 3 270M, Gemma 3 27B, Gemma 3 4B, Gemma 3n E2B, Gemma 3n E4B, Gemma 3n E4B (May), Gemma 7B, PALM-2, and Whisperwind, Anthropic: Claude 2.0, Claude 2.1, Claude 3 Haiku, Claude 3 Opus, Claude 3 Sonnet, Claude 3.5 Haiku, Claude 3.5 Sonnet (June), Claude 3.5 Sonnet (Oct), Claude 3.7 Sonnet, Claude 4 Opus, Claude 4 Sonnet, Claude 4.1 Opus, Claude 4.5 Haiku, Claude 4.5 Sonnet, Claude Instant, Claude Opus 4.5, claude-flan-v3-p, claude-flan-v3-p (low), and claude-flan-v3-p (medium), Mistral: Codestral (Jan), Codestral (May), Codestral-Mamba, Devstral 2, Devstral Medium, Devstral Small, Devstral Small (May), Devstral Small 2, Magistral Medium 1, Magistral Medium 1.1, Magistral Medium 1.2, Magistral Small 1, Magistral Small 1.1, Magistral Small 1.2, Ministral 3 14B, Ministral 3 3B, Ministral 3 8B, Ministral 3B, Ministral 8B, Mistral 7B, Mistral Large (Feb), Mistral Large 2 (Jul), Mistral Large 2 (Nov), Mistral Large 3, Mistral Medium, Mistral Medium 3, Mistral Medium 3.1, Mistral NeMo, Mistral Saba, Mistral Small (Feb), Mistral Small (Sep), Mistral Small 3, Mistral Small 3.1, Mistral Small 3.2, Mixtral 8x22B, Mixtral 8x7B, Pixtral 12B, and Pixtral Large, DeepSeek: DeepSeek Coder V2 Lite, DeepSeek LLM 67B (V1), DeepSeek Prover V2 671B, DeepSeek R1 (FP4), DeepSeek R1 (Jan), DeepSeek R1 0528, DeepSeek R1 0528 Qwen3 8B, DeepSeek R1 Distill Llama 70B, DeepSeek 
R1 Distill Llama 8B, DeepSeek R1 Distill Qwen 1.5B, DeepSeek R1 Distill Qwen 14B, DeepSeek R1 Distill Qwen 32B, DeepSeek V3 (Dec), DeepSeek V3 0324, DeepSeek V3.1, DeepSeek V3.1 Terminus, DeepSeek V3.2, DeepSeek V3.2 Exp, DeepSeek V3.2 Speciale, DeepSeek-Coder-V2, DeepSeek-OCR, DeepSeek-V2, DeepSeek-V2.5, DeepSeek-V2.5 (Dec), DeepSeek-VL2, and Janus Pro 7B, Perplexity: PPLX-70B Online, PPLX-7B-Online, R1 1776, Sonar, Sonar 3.1 Huge, Sonar 3.1 Large, Sonar 3.1 Small , Sonar Large, Sonar Pro, Sonar Reasoning, Sonar Reasoning Pro, and Sonar Small, TII UAE: Falcon-H1R-7B, xAI: Grok 2, Grok 3, Grok 3 Reasoning Beta, Grok 3 mini, Grok 3 mini Reasoning (low), Grok 3 mini Reasoning (high), Grok 4, Grok 4 Fast, Grok 4 Fast 1111 (Reasoning), Grok 4 mini (0908), Grok 4.1 Fast, Grok 4.1 Fast v4, Grok Beta, Grok Code Fast 1, Grok Voice Agent, Grok-1, and test model, OpenChat: OpenChat 3.5, Amazon: Nova 2.0 Lite, Nova 2.0 Lite (high), Nova 2.0 Lite (low), Nova 2.0 Lite (medium), Nova 2.0 Omni, Nova 2.0 Omni (high), Nova 2.0 Omni (low), Nova 2.0 Omni (medium), Nova 2.0 Pro Preview, Nova 2.0 Pro Preview (high), Nova 2.0 Pro Preview (low), Nova 2.0 Pro Preview (medium), Nova 2.0 Realtime, Nova 2.0 Sonic, Nova Lite, Nova Micro, Nova Premier, and Nova Pro, Microsoft Azure: Phi-3 Medium 14B, Phi-3 Mini, Phi-4, Phi-4 Mini, Phi-4 Multimodal, Phi-4 mini reasoning, Phi-4 reasoning, Phi-4 reasoning plus, Yosemite-1-1, Yosemite-1-1-d36, Yosemite 1.1 d36 Updated, Yosemite-1-1-d64, Yosemite 1.1 d64 Updated, and Yosemite, Liquid AI: LFM 1.3B, LFM 3B, LFM 40B, LFM2 1.2B, LFM2 2.6B, LFM2 8B A1B, LFM2.5-1.2B-Instruct, LFM2.5-1.2B-Thinking, and LFM2.5-VL-1.6B, Upstage: Solar Mini, Solar Open 100B, Solar Pro, Solar Pro (Nov), Solar Pro 2, and Solar Pro 2 , Databricks: DBRX, MiniMax: MiniMax M1 40k, MiniMax M1 80k, MiniMax-M2, MiniMax-M2.1, and MiniMax-Text-01, NVIDIA: Cosmos Nemotron 34B, Llama 3.1 Nemotron 70B, Llama 3.1 Nemotron Nano 4B v1.1, Llama 3.1 Nemotron Nano 8B, Llama 3.3 Nemotron Nano 8B, Llama Nemotron Ultra, Llama 3.3 Nemotron Super 49B, Llama Nemotron Super 49B v1.5, NVIDIA Nemotron 3 Nano, NVIDIA Nemotron Nano 12B v2 VL, NVIDIA Nemotron Nano 9B V2, and Nemotron Nano V3 (30B A3B), StepFun: Step-2, Step-2-Mini, Step-Audio R1.1 (Realtime), Step3, step-1-128k, step-1-256k, step-1-32k, step-1-8k, step-1-flash, step-2-16k-exp, and step-r1-v-mini, IBM: Granite 3.0 2B, Granite 3.0 8B, Granite 3.3 8B, Granite 4.0 1B, Granite 4.0 350M, Granite 4.0 8B, Granite 4.0 H 1B, Granite 4.0 H 350M, Granite 4.0 H Small, Granite 4.0 Micro, Granite 4.0 Tiny, and Granite Vision 3.3 2B, Inceptionlabs: Mercury, Mercury Coder Mini, Mercury Coder Small, and Mercury Instruct, Reka AI: Reka Core, Reka Edge, Reka Flash (Feb), Reka Flash, Reka Flash 3, and Reka Flash 3.1, LG AI Research: EXAONE 4.0 32B, EXAONE Deep 32B, Exaone 4.0 1.2B, and K-EXAONE, Xiaomi: MiMo 7B RL and MiMo-V2-Flash, Baidu: ERNIE 4.5, ERNIE 4.5 0.3B, ERNIE 4.5 21B A3B, ERNIE 4.5 300B A47B, ERNIE 4.5 VL 28B A3B, ERNIE 4.5 VL 424B A47B, ERNIE 5.0 Thinking Preview, and ERNIE X1, Baichuan: Baichuan 4 and Baichuan M1 (Preview), vercel: v0-1.0-md, Apple: Apple On-Device and FastVLM, Other: LLaVA-v1.5-7B, Tencent: Hunyuan A13B, Hunyuan 80B A13B, Hunyuan T1, and Hunyuan-TurboS, Prime Intellect: INTELLECT-3, Motif Technologies: Motif-2-12.7B, Korea Telecom: Mi:dm K 2.5 Pro and Mi:dm K 2.5 Pro Preview, Z AI: GLM-4 32B, GLM-4 9B, GLM-4-Air, GLM-4 AirX, GLM-4 FlashX, GLM-4-Long, GLM-4-Plus, GLM-4.1V 9B Thinking, GLM-4.5, GLM-4.5-Air, GLM-4.5V, GLM-4.6, GLM-4.6V, GLM-4.7, 
GLM-4.7-Flash, GLM-Z1 32B, GLM-Z1 9B, GLM-Z1 Rumination 32B, and GLM-Zero (Preview), Cohere: Aya Expanse 32B, Aya Expanse 8B, Command, Command A, Command Light, Command R7B, Command-R, Command-R (Mar), Command-R+ (Apr), and Command-R+, Bytedance: Duobao 1.5 Pro, Seed-Thinking-v1.5, Skylark Lite, and Skylark Pro, AI21 Labs: Jamba 1.5 Large, Jamba 1.5 Large (Feb), Jamba 1.5 Mini, Jamba 1.5 Mini (Feb), Jamba 1.6 Large, Jamba 1.6 Mini, Jamba 1.7 Large, Jamba 1.7 Mini, Jamba Instruct, and Jamba Reasoning 3B, Snowflake: Arctic and Snowflake Llama 3.3 70B, PaddlePaddle: PaddleOCR-VL-0.9B, Alibaba: QwQ-32B, QwQ 32B-Preview, Qwen Chat 14B, Qwen Chat 72B, Qwen Chat 7B, Qwen1.5 Chat 110B, Qwen1.5 Chat 14B, Qwen1.5 Chat 32B, Qwen1.5 Chat 72B, Qwen1.5 Chat 7B, Qwen2 72B, Qwen2 Instruct 7B, Qwen2 Instruct A14B 57B, Qwen2-VL 72B, Qwen2.5 Coder 32B, Qwen2.5 Coder 7B , Qwen2.5 Instruct 14B, Qwen2.5 Instruct 32B, Qwen2.5 72B, Qwen2.5 Instruct 7B, Qwen2.5 Max, Qwen2.5 Max 01-29, Qwen2.5 Omni 7B, Qwen2.5 Plus, Qwen2.5 Turbo, Qwen2.5 VL 72B, Qwen2.5 VL 7B, Qwen3 0.6B, Qwen3 1.7B, Qwen3 14B, Qwen3 235B, Qwen3 235B A22B 2507, Qwen3 235B 2507, Qwen3 30B, Qwen3 30B A3B 2507, Qwen3 32B, Qwen3 4B, Qwen3 4B 2507, Qwen3 8B, Qwen3 Coder 30B A3B, Qwen3 Coder 480B, Qwen3 Max, Qwen3 Max (Preview), Qwen3 Max Thinking, Qwen3 Next 80B A3B, Qwen3 Omni 30B A3B, Qwen3 VL 235B A22B, Qwen3 VL 30B A3B, Qwen3 VL 32B, Qwen3 VL 4B, and Qwen3 VL 8B, InclusionAI: Ling-1T, Ling-flash-2.0, Ling-mini-2.0, Ring-1T, and Ring-flash-2.0, 01.AI: Yi-Large and Yi-Lightning, and ByteDance Seed: Doubao Seed Code, Doubao-Seed-1.8, and Seed-OSS-36B-Instruct.