Comparison and ranking of the performance of over 100 AI models (LLMs) across key metrics, including intelligence, price, speed (output speed in tokens per second and latency as time to first token, TTFT), context window, and others. For more details, including our methodology, see our FAQs.
| Model | Context Window | Creator | Intelligence Index | Omniscience Index | Blended Price (USD per 1M tokens) | Output Speed (tokens/s) | Latency (s) |
|---|---|---|---|---|---|---|---|
Gemini 3 Pro Preview (high) | 1m | Google | 73 | 13 | $4.50 | 90 | 24.53 |
Claude Opus 4.5 | 200k | Anthropic | 70 | 10 | $10.00 | 48 | 2.02 | |
GPT-5.1 (high) | 400k | OpenAI | 70 | 2 | $3.44 | 102 | 34.93 | |
GPT-5 (high) | 400k | OpenAI | 68 | −11 | $3.44 | 78 | 88.89 | |
Kimi K2 Thinking | 256k | Kimi | 67 | −23 | $1.07 | 97 | 0.59 |
GPT-5.1 Codex (high) | 400k | OpenAI | 67 | -- | $3.44 | 112 | 22.75 | |
GPT-5 (medium) | 400k | OpenAI | 66 | −14 | $3.44 | 63 | 40.95 | |
DeepSeek V3.2 | 128k | DeepSeek | 66 | −23 | $0.32 | 31 | 1.27 | |
o3 | 200k | OpenAI | 65 | −17 | $3.50 | 113 | 9.22 | |
Grok 4 | 256k | xAI | 65 | 1 | $6.00 | 49 | 48.61 | |
Gemini 3 Pro Preview (low) | 1m | Google | 65 | −1 | $4.50 | 106 | 4.08 |
GPT-5 mini (high) | 400k | OpenAI | 64 | −20 | $0.69 | 70 | 76.35 | |
Grok 4.1 Fast | 2m | xAI | 64 | −31 | $0.28 | 156 | 22.35 | |
KAT-Coder-Pro V1 | 256k | KwaiKAT | 64 | −36 | $0.00 | 65 | 0.91 | |
Claude 4.5 Sonnet | 1m | Anthropic | 63 | −2 | $6.00 | 37 | 2.23 | |
Nova 2.0 Pro Preview (medium) | 256k | Amazon | 62 | −50 | $3.44 | 184 | 9.55 | |
GPT-5.1 Codex mini (high) | 400k | OpenAI | 62 | −18 | $0.69 | 142 | 3.21 | |
GPT-5 (low) | 400k | OpenAI | 62 | −13 | $3.44 | 72 | 16.86 | |
MiniMax-M2 | 205k | MiniMax | 61 | −50 | $0.53 | 98 | 1.48 | |
GPT-5 mini (medium) | 400k | OpenAI | 61 | −13 | $0.69 | 70 | 20.08 | |
gpt-oss-120B (high) | 131k | OpenAI | 61 | −52 | $0.26 | 294 | 0.49 | |
Grok 4 Fast | 2m | xAI | 60 | −31 | $0.28 | 144 | 12.70 | |
Claude Opus 4.5 | 200k | Anthropic | 60 | −6 | $10.00 | 76 | 2.09 | |
Gemini 2.5 Pro | 1m | Google | 60 | −18 | $3.44 | 125 | 28.17 |
DeepSeek V3.2 Speciale | 128k | DeepSeek | 59 | −19 | $0.32 | 28 | 0.95 | |
Nova 2.0 Lite (medium) | 1m | Amazon | 58 | −58 | $0.85 | 268 | 22.52 | |
DeepSeek V3.1 Terminus | 128k | DeepSeek | 58 | −27 | $0.80 | 0 | 0.00 | |
Nova 2.0 Pro Preview (low) | 256k | Amazon | 58 | −48 | $3.44 | 184 | 5.00 | |
Qwen3 235B A22B 2507 | 256k | Alibaba | 57 | −48 | $2.63 | 86 | 1.38 | |
Doubao Seed Code | 256k | ByteDance Seed | 57 | −36 | $0.41 | 54 | 3.04 | |
Grok 3 mini Reasoning (high) | 1m | xAI | 57 | −7 | $0.35 | 192 | 0.52 | |
Apriel-v1.6-15B-Thinker | 128k | ServiceNow | 57 | −60 | $0.00 | 152 | 0.22 | |
Nova 2.0 Omni (medium) | 1m | Amazon | 56 | −60 | $0.85 | 0 | 0.00 | |
GLM-4.6 | 200k | Z AI | 56 | −44 | $1.00 | 100 | 0.50 | |
Qwen3 Max Thinking | 262k | Alibaba | 56 | −40 | $2.40 | 41 | 1.84 | |
Qwen3 Max | 262k | Alibaba | 55 | −45 | $2.40 | 42 | 2.04 | |
Claude 4.5 Haiku | 200k | Anthropic | 55 | −6 | $2.00 | 107 | 0.56 | |
Gemini 2.5 Flash (Sep) | 1m | Google | 54 | −38 | $0.85 | 210 | 13.12 |
Qwen3 VL 235B A22B | 262k | Alibaba | 54 | −47 | $2.63 | 51 | 1.10 | |
Qwen3 Next 80B A3B | 262k | Alibaba | 54 | −53 | $1.88 | 0 | 0.00 | |
ERNIE 5.0 Thinking Preview | 128k | Baidu | 53 | −42 | $1.47 | 16 | 3.51 | |
DeepSeek V3.2 | 128k | DeepSeek | 52 | −49 | $0.32 | 34 | 1.29 | |
gpt-oss-20B (high) | 131k | OpenAI | 52 | −65 | $0.10 | 245 | 0.55 | |
Magistral Medium 1.2 | 128k | Mistral | 52 | −28 | $2.75 | 47 | 0.43 |
DeepSeek R1 0528 | 128k | DeepSeek | 52 | −30 | $1.98 | 0 | 0.00 | |
Qwen3 VL 32B | 256k | Alibaba | 52 | −53 | $2.63 | 57 | 1.25 | |
Seed-OSS-36B-Instruct | 512k | ByteDance Seed | 52 | −54 | $0.30 | 33 | 1.97 | |
Apriel-v1.5-15B-Thinker | 128k | ServiceNow | 52 | −56 | $0.00 | 152 | 0.18 | |
GPT-5 nano (high) | 400k | OpenAI | 51 | −30 | $0.14 | 123 | 70.92 | |
Kimi K2 0905 | 256k | Kimi | 50 | −28 | $1.20 | 90 | 0.53 |
Claude 4.5 Sonnet | 1m | Anthropic | 50 | −11 | $6.00 | 73 | 2.13 | |
GPT-5 nano (medium) | 400k | OpenAI | 49 | −27 | $0.14 | 103 | 31.93 | |
GLM-4.5-Air | 128k | Z AI | 49 | −63 | $0.42 | 116 | 0.34 | |
Nova 2.0 Omni (low) | 1m | Amazon | 49 | −51 | $0.85 | 0 | 0.00 | |
Grok Code Fast 1 | 256k | xAI | 49 | −38 | $0.53 | 130 | 4.14 | |
Gemini 2.5 Flash-Lite (Sep) | 1m | Google | 48 | −55 | $0.17 | 484 | 6.63 |
gpt-oss-120B (low) | 131k | OpenAI | 48 | −56 | $0.26 | 305 | 0.52 | |
Nova 2.0 Lite (low) | 1m | Amazon | 47 | −55 | $0.85 | 276 | 8.45 | |
Gemini 2.5 Flash (Sep) | 1m | Google | 47 | −41 | $0.85 | 207 | 0.37 |
Qwen3 30B A3B 2507 | 262k | Alibaba | 46 | −57 | $0.75 | 164 | 1.29 | |
DeepSeek V3.1 Terminus | 128k | DeepSeek | 46 | −45 | $0.80 | 0 | 0.00 | |
Qwen3 235B 2507 | 256k | Alibaba | 45 | −45 | $1.23 | 83 | 1.20 | |
Qwen3 VL 30B A3B | 256k | Alibaba | 45 | −59 | $0.75 | 120 | 1.06 | |
Llama Nemotron Super 49B v1.5 | 128k | NVIDIA | 45 | −47 | $0.17 | 81 | 0.29 | |
Motif-2-12.7B | 128k | Motif Technologies | 45 | −62 | $0.00 | 0 | 0.00 | |
Qwen3 Next 80B A3B | 262k | Alibaba | 45 | −60 | $0.88 | 232 | 1.21 | |
Ling-1T | 128k | InclusionAI | 45 | -- | $1.00 | 0 | 0.00 |
GLM-4.6 | 200k | Z AI | 45 | −33 | $1.00 | 52 | 0.63 | |
gpt-oss-20B (low) | 131k | OpenAI | 44 | −61 | $0.10 | 271 | 0.56 | |
Qwen3 VL 235B A22B | 262k | Alibaba | 44 | −54 | $1.23 | 45 | 1.20 | |
GPT-5 (minimal) | 400k | OpenAI | 43 | −37 | $3.44 | 62 | 0.85 | |
Qwen3 4B 2507 | 262k | Alibaba | 43 | -- | $0.00 | 0 | 0.00 | |
Magistral Small 1.2 | 128k | Mistral | 43 | −66 | $0.75 | 95 | 0.37 |
GPT-5.1 | 400k | OpenAI | 43 | −37 | $3.44 | 87 | 0.93 | |
EXAONE 4.0 32B | 131k | LG AI Research | 43 | −61 | $0.70 | 164 | 0.37 |
Qwen3 Coder 480B | 262k | Alibaba | 42 | -- | $3.00 | 89 | 1.74 | |
Nova 2.0 Pro Preview | 256k | Amazon | 42 | −50 | $3.44 | 210 | 0.48 | |
Ring-1T | 128k | InclusionAI | 42 | -- | $0.99 | 34 | 1.73 |
GPT-5 (ChatGPT) | 128k | OpenAI | 42 | -- | $3.44 | 128 | 0.77 | |
Claude 4.5 Haiku | 200k | Anthropic | 42 | −8 | $2.00 | 164 | 0.85 | |
Gemini 2.5 Flash-Lite (Sep) | 1m | Google | 42 | −44 | $0.17 | 440 | 0.26 |
GPT-5 mini (minimal) | 400k | OpenAI | 42 | -- | $0.69 | 76 | 1.03 | |
Hermes 4 405B | 128k | Nous Research | 42 | −37 | $1.50 | 37 | 0.70 |
Qwen3 VL 32B | 256k | Alibaba | 41 | −64 | $1.23 | 57 | 1.23 | |
NVIDIA Nemotron Nano 12B v2 VL | 128k | NVIDIA | 41 | −66 | $0.30 | 130 | 0.25 | |
Qwen3 Omni 30B A3B | 66k | Alibaba | 40 | −62 | $0.43 | 109 | 1.08 | |
Ring-flash-2.0 | 128k | InclusionAI | 40 | -- | $0.25 | 41 | 2.11 |
Hermes 4 70B | 128k | Nous Research | 39 | −51 | $0.20 | 91 | 0.62 |
Grok 4 Fast | 2m | xAI | 39 | -- | $0.28 | 158 | 0.52 | |
Llama Nemotron Ultra | 128k | NVIDIA | 38 | −46 | $0.90 | 40 | 0.68 | |
Qwen3 VL 30B A3B | 256k | Alibaba | 38 | −64 | $0.35 | 120 | 1.03 | |
Mistral Large 3 | 256k | Mistral | 38 | −41 | $0.75 | 59 | 0.65 |
Ling-flash-2.0 | 128k | InclusionAI | 38 | −67 | $0.25 | 63 | 1.76 |
Grok 4.1 Fast | 2m | xAI | 38 | −52 | $0.28 | 127 | 0.58 | |
Solar Pro 2 | 66k | Upstage | 38 | −58 | $0.50 | 127 | 1.10 | |
NVIDIA Nemotron Nano 9B V2 | 131k | NVIDIA | 37 | −43 | $0.07 | 98 | 0.27 | |
GLM-4.5V | 64k | Z AI | 37 | −46 | $0.90 | 36 | 1.04 | |
Qwen3 30B A3B 2507 | 262k | Alibaba | 37 | −67 | $0.35 | 97 | 1.17 | |
Devstral 2 | 256k | Mistral | 36 | −48 | $0.00 | 66 | 0.39 |
OLMo 3 32B Think | 66k | Allen Institute for AI | 36 | -- | $0.24 | 26 | 0.28 | |
NVIDIA Nemotron Nano 9B V2 | 131k | NVIDIA | 36 | -- | $0.10 | 97 | 0.28 | |
Llama 4 Maverick | 1m | Meta | 36 | −43 | $0.42 | 127 | 0.46 | |
Nova 2.0 Lite | 1m | Amazon | 36 | −60 | $0.85 | 288 | 0.47 | |
Llama 3.3 Nemotron Super 49B | 128k | NVIDIA | 35E | -- | $0.00 | 0 | 0.00 | |
Mistral Medium 3.1 | 128k | Mistral | 35 | −48 | $0.80 | 103 | 0.40 |
Nova 2.0 Omni | 1m | Amazon | 34 | -- | $0.85 | 265 | 0.63 | |
Qwen3 Coder 30B A3B | 262k | Alibaba | 33 | -- | $0.90 | 124 | 1.75 | |
ERNIE 4.5 300B A47B | 131k | Baidu | 33 | −37 | $0.48 | 30 | 0.73 | |
Hermes 4 405B | 128k | Nous Research | 33 | −35 | $1.50 | 38 | 0.71 |
Nova Premier | 1m | Amazon | 32 | −38 | $5.00 | 93 | 0.77 | |
Qwen3 VL 8B | 256k | Alibaba | 32 | −54 | $0.66 | 70 | 1.06 | |
OLMo 3 7B Think | 66k | Allen Institute for AI | 32 | −74 | $0.14 | 117 | 0.45 | |
Devstral Small 2 | 256k | Mistral | 32 | −59 | $0.00 | 238 | 0.37 |
DeepSeek R1 0528 Qwen3 8B | 33k | DeepSeek | 31 | −65 | $0.07 | 73 | 0.79 | |
Ministral 14B (Dec '25) | 256k | Mistral | 31 | −67 | $0.20 | 151 | 0.28 |
Qwen3 4B 2507 | 262k | Alibaba | 30 | -- | $0.00 | 0 | 0.00 | |
EXAONE 4.0 32B | 131k | LG AI Research | 30 | −66 | $0.70 | 162 | 0.35 |
Solar Pro 2 | 66k | Upstage | 30 | -- | $0.50 | 117 | 1.23 | |
Qwen3 Omni 30B A3B | 66k | Alibaba | 30 | -- | $0.43 | 110 | 1.07 | |
DeepSeek R1 Distill Llama 70B | 128k | DeepSeek | 30 | −47 | $0.88 | 104 | 0.88 | |
GPT-5 nano (minimal) | 400k | OpenAI | 29 | −66 | $0.14 | 121 | 0.63 | |
Mistral Small 3.2 | 128k | Mistral | 29 | −51 | $0.15 | 134 | 0.42 |
Ministral 8B (Dec '25) | 256k | Mistral | 28 | −70 | $0.15 | 194 | 0.28 |
Llama 4 Scout | 10m | Meta | 28 | −53 | $0.24 | 127 | 0.51 | |
Llama 3.1 405B | 128k | Meta | 28 | −18 | $4.19 | 31 | 0.81 | |
Llama 3.3 70B | 128k | Meta | 28 | -- | $0.62 | 152 | 0.49 | |
Devstral Medium | 256k | Mistral | 28 | −33 | $0.80 | 175 | 0.43 |
Ling-mini-2.0 | 131k | InclusionAI | 28 | -- | $0.12 | 178 | 1.89 |
Qwen3 VL 4B | 256k | Alibaba | 27 | -- | $0.00 | 0 | 0.00 | |
Devstral Small | 256k | Mistral | 27 | −52 | $0.15 | 375 | 0.36 |
Qwen3 VL 8B | 256k | Alibaba | 27 | −54 | $0.31 | 119 | 1.04 | |
Command A | 256k | Cohere | 27 | −50 | $4.38 | 74 | 0.24 | |
Exaone 4.0 1.2B | 64k | LG AI Research | 27 | -- | $0.00 | 0 | 0.00 |
Llama Nemotron Super 49B v1.5 | 128k | NVIDIA | 27 | -- | $0.17 | 81 | 0.24 | |
Llama 3.1 Nemotron Nano 4B v1.1 | 128k | NVIDIA | 26E | -- | $0.00 | 0 | 0.00 | |
Kimi Linear 48B A3B Instruct | 1m | Kimi | 26 | -- | $0.38 | 80 | 0.44 |
GLM-4.5V | 64k | Z AI | 26 | -- | $0.90 | 34 | 0.90 | |
Reka Flash 3 | 128k | Reka AI | 26 | -- | $0.35 | 56 | 1.37 | |
Llama 3.3 Nemotron Super 49B | 128k | NVIDIA | 26E | -- | $0.00 | 0 | 0.00 | |
NVIDIA Nemotron Nano 12B v2 VL | 128k | NVIDIA | 25 | -- | $0.30 | 134 | 0.59 | |
Qwen3 VL 4B | 256k | Alibaba | 25 | -- | $0.00 | 0 | 0.00 | |
Hermes 4 70B | 128k | Nous Research | 24 | -- | $0.20 | 95 | 0.58 |
Llama 3.1 Nemotron 70B | 128k | NVIDIA | 24 | -- | $0.60 | 48 | 0.35 | |
Granite 4.0 H Small | 128k | IBM | 23 | −62 | $0.11 | 136 | 8.75 | |
Phi-4 | 16k | Microsoft Azure | 23 | -- | $0.22 | 24 | 0.46 | |
OLMo 3 7B | 66k | Allen Institute for AI | 22 | -- | $0.13 | 46 | 0.61 | |
Gemma 3 27B | 128k | Google | 22 | -- | $0.00 | 45 | 0.69 |
Ministral 3B (Dec '25) | 256k | Mistral | 22 | −64 | $0.10 | 303 | 0.32 |
Jamba Reasoning 3B | 262k | AI21 Labs | 21 | -- | $0.00 | 0 | 0.00 | |
Jamba 1.7 Large | 256k | AI21 Labs | 21 | -- | $3.50 | 50 | 0.78 | |
Exaone 4.0 1.2B | 64k | LG AI Research | 20 | -- | $0.00 | 0 | 0.00 |
Gemma 3 12B | 128k | Google | 20 | -- | $0.00 | 51 | 4.80 |
R1 1776 | 128k | Perplexity | 19E | -- | $0.00 | 0 | 0.00 |
Llama 3.2 90B (Vision) | 128k | Meta | 19E | -- | $0.72 | 44 | 0.35 | |
Nova Micro | 130k | Amazon | 18 | -- | $0.06 | 343 | 0.32 | |
LFM2 8B A1B | 33k | Liquid AI | 17 | -- | $0.00 | 0 | 0.00 | |
Granite 4.0 Micro | 128k | IBM | 16 | -- | $0.00 | 0 | 0.00 | |
Phi-4 Mini | 128k | Microsoft Azure | 16 | -- | $0.00 | 44 | 0.34 | |
DeepHermes 3 - Mistral 24B | 32k | Nous Research | 16E | -- | $0.00 | 0 | 0.00 |
Llama 3.2 11B (Vision) | 128k | Meta | 16 | -- | $0.16 | 65 | 0.34 | |
Gemma 3n E4B | 32k | Google | 15 | -- | $0.03 | 38 | 0.36 |
Jamba 1.7 Mini | 258k | AI21 Labs | 15 | -- | $0.25 | 146 | 0.55 | |
Gemma 3 4B | 128k | Google | 15 | -- | $0.00 | 48 | 0.91 |
Granite 4.0 H 1B | 128k | IBM | 14 | -- | $0.00 | 0 | 0.00 | |
Granite 4.0 1B | 128k | IBM | 13 | -- | $0.00 | 0 | 0.00 | |
Phi-4 Multimodal | 128k | Microsoft Azure | 12E | -- | $0.00 | 17 | 0.36 | |
LFM2 2.6B | 33k | Liquid AI | 12 | -- | $0.00 | 0 | 0.00 | |
Gemma 3n E2B | 32k | Google | 11 | -- | $0.00 | 49 | 0.34 |
LFM2 1.2B | 33k | Liquid AI | 10 | -- | $0.00 | 0 | 0.00 | |
Molmo 7B-D | 4k | Allen Institute for AI | 9 | -- | $0.00 | 0 | 0.00 | |
Granite 4.0 H 350M | 33k | IBM | 8 | -- | $0.00 | 0 | 0.00 | |
Granite 4.0 350M | 33k | IBM | 8 | -- | $0.00 | 0 | 0.00 | |
Gemma 3 1B | 32k | Google | 7 | -- | $0.00 | 54 | 0.49 |
Gemma 3 270M | 32k | Google | 6 | -- | $0.00 | 0 | 0.00 |
DeepHermes 3 - Llama-3.1 8B | 128k | Nous Research | 2E | -- | $0.00 | 0 | 0.00 |
GPT-5.2 (xhigh) | 400k | OpenAI | -- | -- | $4.81 | 67 | 48.98 | |
DeepSeek-OCR | 8k | DeepSeek | -- | -- | $0.05 | 359 | 0.20 | |
Cogito v2.1 | 128k | Deep Cogito | -- | -- | $1.25 | 80 | 0.34 |
GLM-4.6V | 128k | Z AI | -- | -- | $0.45 | 30 | 3.93 | |
GLM-4.6V | 128k | Z AI | -- | -- | $0.45 | 46 | 95.92 | |
Context window: Maximum number of combined input and output tokens. Output tokens commonly have a significantly lower limit (varies by model).
Output speed: Tokens per second received while the model is generating tokens (i.e., after the first chunk has been received from the API, for models that support streaming).
Latency (TTFT): Time to first token received, in seconds, after the API request is sent. For reasoning models that share reasoning tokens, this is the time to the first reasoning token. For models that do not support streaming, it represents the time to receive the full completion.
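As an illustration of these two definitions, the sketch below times a streaming response. It assumes an OpenAI-compatible endpoint via the openai Python SDK, uses a placeholder model name and prompt, and approximates output tokens by counting streamed content chunks; it is not the harness used to collect the figures above.

```python
# Minimal sketch: measure TTFT and output speed against an OpenAI-compatible
# streaming endpoint. Model name and prompt are placeholders; counting
# streamed chunks only approximates output tokens.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def measure(model: str, prompt: str) -> dict:
    start = time.perf_counter()
    first_chunk_at = None
    chunks = 0

    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        # Some chunks carry no content (e.g. role-only deltas); skip them.
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            if first_chunk_at is None:
                first_chunk_at = time.perf_counter()  # first received token/chunk
            chunks += 1
    end = time.perf_counter()

    ttft = first_chunk_at - start if first_chunk_at else None
    # Output speed is computed over the generation phase only,
    # i.e. after the first chunk has been received.
    gen_time = end - first_chunk_at if first_chunk_at else None
    tok_per_s = chunks / gen_time if gen_time and gen_time > 0 else None
    return {"ttft_s": ttft, "output_tokens_per_s": tok_per_s}


print(measure("gpt-4o-mini", "Explain time to first token in one sentence."))
```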
Blended price: Price per token, in USD per million tokens, blended from input and output token prices at a 3:1 input-to-output ratio.
Output price: Price per token generated by the model (received from the API), in USD per million tokens.
Input price: Price per token included in the request/message sent to the API, in USD per million tokens.
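The blended figure shown in the Price column can be reproduced from a model's separate input and output prices. Below is a minimal sketch of the 3:1 weighting; the prices used are illustrative placeholders, not quotes for any particular model in the table.

```python
# Minimal sketch of the 3:1 blended price: three input tokens are weighted
# for every output token. Prices below are illustrative placeholders.
def blended_price(input_usd_per_mtok: float,
                  output_usd_per_mtok: float,
                  input_weight: float = 3.0,
                  output_weight: float = 1.0) -> float:
    """Blend input and output USD-per-1M-token prices at input:output = 3:1."""
    total = input_weight + output_weight
    return (input_weight * input_usd_per_mtok
            + output_weight * output_usd_per_mtok) / total


# Example: $1.25 per 1M input tokens and $10.00 per 1M output tokens
# blend to (3 * 1.25 + 1 * 10.00) / 4 = 3.44 USD per 1M tokens.
print(round(blended_price(1.25, 10.00), 2))  # 3.44
```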
Metrics are 'live' and based on measurements from the past 72 hours; measurements are taken 8 times per day for single requests and 2 times per day for parallel requests.
Models compared: OpenAI: GPT 4o Audio, GPT 4o Realtime, GPT 4o Speech Pipeline, GPT Realtime, GPT Realtime Mini (Oct '25), GPT-3.5 Turbo, GPT-3.5 Turbo (0125), GPT-3.5 Turbo (0301), GPT-3.5 Turbo (0613), GPT-3.5 Turbo (1106), GPT-3.5 Turbo Instruct, GPT-4, GPT-4 Turbo, GPT-4 Turbo (0125), GPT-4 Turbo (1106), GPT-4 Vision, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano, GPT-4.5 (Preview), GPT-4o (Apr), GPT-4o (Aug), GPT-4o (ChatGPT), GPT-4o (Mar), GPT-4o (May), GPT-4o (Nov), GPT-4o Realtime (Dec), GPT-4o mini, GPT-4o mini Realtime (Dec), GPT-5 (ChatGPT), GPT-5 (high), GPT-5 (low), GPT-5 (medium), GPT-5 (minimal), GPT-5 Codex (high), GPT-5 Pro (high), GPT-5 mini (high), GPT-5 mini (medium), GPT-5 mini (minimal), GPT-5 nano (high), GPT-5 nano (medium), GPT-5 nano (minimal), GPT-5.1, GPT-5.1 (high), GPT-5.1 Codex (high), GPT-5.1 Codex mini (high), GPT-5.2, GPT-5.2 (xhigh), gpt-oss-120B (high), gpt-oss-120B (low), gpt-oss-20B (high), gpt-oss-20B (low), o1, o1-mini, o1-preview, o1-pro, o3, o3-mini, o3-mini (high), o3-pro, and o4-mini (high), Meta: Code Llama 70B, Llama 2 Chat 13B, Llama 2 Chat 70B, Llama 2 Chat 7B, Llama 3 70B, Llama 3 8B, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, Llama 3.2 11B (Vision), Llama 3.2 1B, Llama 3.2 3B, Llama 3.2 90B (Vision), Llama 3.3 70B, Llama 4 Behemoth, Llama 4 Maverick, Llama 4 Scout, and Llama 65B, Google: Gemini 1.0 Pro, Gemini 1.0 Ultra, Gemini 1.5 Flash (May), Gemini 1.5 Flash (Sep), Gemini 1.5 Flash-8B, Gemini 1.5 Pro (May), Gemini 1.5 Pro (Sep), Gemini 2.0 Flash, Gemini 2.0 Flash (exp), Gemini 2.0 Flash Thinking exp. (Dec), Gemini 2.0 Flash Thinking exp. (Jan), Gemini 2.0 Flash-Lite (Feb), Gemini 2.0 Flash-Lite (Preview), Gemini 2.0 Pro Experimental, Gemini 2.5 Flash, Gemini 2.5 Flash Live Preview, Gemini 2.5 Flash Native Audio, Gemini 2.5 Flash Native Audio Dialog, Gemini 2.5 Flash (Sep), Gemini 2.5 Flash-Lite, Gemini 2.5 Flash-Lite (Sep), Gemini 2.5 Pro, Gemini 2.5 Pro (Mar), Gemini 2.5 Pro (May), Gemini 3 Pro Preview (high), Gemini 3 Pro Preview (low), Gemini Experimental (Nov), Gemma 2 27B, Gemma 2 2B, Gemma 2 9B, Gemma 3 12B, Gemma 3 1B, Gemma 3 270M, Gemma 3 27B, Gemma 3 4B, Gemma 3n E2B, Gemma 3n E4B, Gemma 3n E4B (May), Gemma 7B, PALM-2, and Whisperwind, Anthropic: Claude 2.0, Claude 2.1, Claude 3 Haiku, Claude 3 Opus, Claude 3 Sonnet, Claude 3.5 Haiku, Claude 3.5 Sonnet (June), Claude 3.5 Sonnet (Oct), Claude 3.7 Sonnet, Claude 4 Opus, Claude 4 Sonnet, Claude 4.1 Opus, Claude 4.5 Haiku, Claude 4.5 Sonnet, Claude Instant, and Claude Opus 4.5, Mistral: Codestral (Jan), Codestral (May), Codestral-Mamba, Devstral 2, Devstral Medium, Devstral Small, Devstral Small (May), Devstral Small 2, Magistral Medium 1, Magistral Medium 1.1, Magistral Medium 1.2, Magistral Small 1, Magistral Small 1.1, Magistral Small 1.2, Ministral 14B (Dec '25), Ministral 3B, Ministral 3B (Dec '25), Ministral 8B, Ministral 8B (Dec '25), Mistral 7B, Mistral Large (Feb), Mistral Large 2 (Jul), Mistral Large 2 (Nov), Mistral Large 3, Mistral Medium, Mistral Medium 3, Mistral Medium 3.1, Mistral NeMo, Mistral Saba, Mistral Small (Feb), Mistral Small (Sep), Mistral Small 3, Mistral Small 3.1, Mistral Small 3.2, Mixtral 8x22B, Mixtral 8x7B, Pixtral 12B, and Pixtral Large, DeepSeek: DeepSeek Coder V2 Lite, DeepSeek LLM 67B (V1), DeepSeek Prover V2 671B, DeepSeek R1 (FP4), DeepSeek R1 (Jan), DeepSeek R1 0528, DeepSeek R1 0528 Qwen3 8B, DeepSeek R1 Distill Llama 70B, DeepSeek R1 Distill Llama 8B, DeepSeek R1 Distill Qwen 1.5B, DeepSeek R1 Distill Qwen 14B, DeepSeek R1 Distill Qwen 32B, 
DeepSeek V3 (Dec), DeepSeek V3 0324, DeepSeek V3.1, DeepSeek V3.1 Terminus, DeepSeek V3.2, DeepSeek V3.2 Exp, DeepSeek V3.2 Speciale, DeepSeek-Coder-V2, DeepSeek-OCR, DeepSeek-V2, DeepSeek-V2.5, DeepSeek-V2.5 (Dec), DeepSeek-VL2, and Janus Pro 7B, Perplexity: PPLX-70B Online, PPLX-7B-Online, R1 1776, Sonar, Sonar 3.1 Huge, Sonar 3.1 Large, Sonar 3.1 Small , Sonar Large, Sonar Pro, Sonar Reasoning, Sonar Reasoning Pro, and Sonar Small, xAI: Grok 2, Grok 3, Grok 3 Reasoning Beta, Grok 3 mini, Grok 3 mini Reasoning (low), Grok 3 mini Reasoning (high), Grok 4, Grok 4 Fast, Grok 4 Fast 1111 (Reasoning), Grok 4 mini (0908), Grok 4.1 Fast, Grok 4.1 Fast v4, Grok Beta, Grok Code Fast 1, Grok Voice, Grok-1, and test model, OpenChat: OpenChat 3.5, Amazon: Nova 2.0 Lite, Nova 2.0 Lite (high), Nova 2.0 Lite (low), Nova 2.0 Lite (medium), Nova 2.0 Omni, Nova 2.0 Omni (high), Nova 2.0 Omni (low), Nova 2.0 Omni (medium), Nova 2.0 Pro Preview, Nova 2.0 Pro Preview (high), Nova 2.0 Pro Preview (low), Nova 2.0 Pro Preview (medium), Nova 2.0 Realtime, Nova 2.0 Sonic, Nova Lite, Nova Micro, Nova Premier, and Nova Pro, Microsoft Azure: Phi-3 Medium 14B, Phi-3 Mini, Phi-4, Phi-4 Mini, Phi-4 Multimodal, Phi-4 mini reasoning, Phi-4 reasoning, Phi-4 reasoning plus, Yosemite-1-1, Yosemite-1-1-d36, Yosemite 1.1 d36 Updated, Yosemite-1-1-d64, Yosemite 1.1 d64 Updated, and Yosemite, Liquid AI: LFM 1.3B, LFM 3B, LFM 40B, LFM2 1.2B, LFM2 2.6B, and LFM2 8B A1B, Upstage: Solar Mini, Solar Pro, Solar Pro (Nov), Solar Pro 2, and Solar Pro 2 , Databricks: DBRX, MiniMax: MiniMax M1 40k, MiniMax M1 80k, MiniMax-M2, and MiniMax-Text-01, NVIDIA: Cosmos Nemotron 34B, Llama 3.1 Nemotron 70B, Llama 3.1 Nemotron Nano 4B v1.1, Llama 3.1 Nemotron Nano 8B, Llama 3.3 Nemotron Nano 8B, Llama Nemotron Ultra, Llama 3.3 Nemotron Super 49B, Llama Nemotron Super 49B v1.5, Nemotron 3 Nano (30B A3B), NVIDIA Nemotron 3 Nano, NVIDIA Nemotron Nano 12B v2 VL, and NVIDIA Nemotron Nano 9B V2, StepFun: Step-2, Step-2-Mini, Step3, step-1-128k, step-1-256k, step-1-32k, step-1-8k, step-1-flash, step-2-16k-exp, and step-r1-v-mini, IBM: Granite 3.0 2B, Granite 3.0 8B, Granite 3.3 8B, Granite 4.0 1B, Granite 4.0 350M, Granite 4.0 8B, Granite 4.0 H 1B, Granite 4.0 H 350M, Granite 4.0 H Small, Granite 4.0 Micro, Granite 4.0 Tiny, and Granite Vision 3.3 2B, Inceptionlabs: Mercury, Mercury Coder Mini, Mercury Coder Small, and Mercury Instruct, Reka AI: Reka Core, Reka Edge, Reka Flash (Feb), Reka Flash, Reka Flash 3, and Reka Flash 3.1, LG AI Research: EXAONE 4.0 32B, EXAONE Deep 32B, and Exaone 4.0 1.2B, Xiaomi: MiMo 7B RL and Mimo-v2-flash-1207-sft, Baidu: ERNIE 4.5, ERNIE 4.5 0.3B, ERNIE 4.5 21B A3B, ERNIE 4.5 300B A47B, ERNIE 4.5 VL 28B A3B, ERNIE 4.5 VL 424B A47B, ERNIE 5.0 Thinking Preview, and ERNIE X1, Baichuan: Baichuan 4 and Baichuan M1 (Preview), vercel: v0-1.0-md, Apple: Apple On-Device and FastVLM, Other: LLaVA-v1.5-7B, Tencent: Hunyuan A13B, Hunyuan 80B A13B, Hunyuan T1, and Hunyuan-TurboS, Prime Intellect: INTELLECT-3, Motif Technologies: Motif-2-12.7B, Korea Telecom: midm-250-pro-rsnsft, Z AI: GLM-4 32B, GLM-4 9B, GLM-4-Air, GLM-4 AirX, GLM-4 FlashX, GLM-4-Long, GLM-4-Plus, GLM-4.1V 9B Thinking, GLM-4.5, GLM-4.5-Air, GLM-4.5V, GLM-4.6, GLM-4.6V, GLM-Z1 32B, GLM-Z1 9B, GLM-Z1 Rumination 32B, and GLM-Zero (Preview), Cohere: Aya Expanse 32B, Aya Expanse 8B, Command, Command A, Command Light, Command R7B, Command-R, Command-R (Mar), Command-R+ (Apr), and Command-R+, Bytedance: Duobao 1.5 Pro, Seed-Thinking-v1.5, Skylark Lite, and Skylark Pro, AI21 
Labs: Jamba 1.5 Large, Jamba 1.5 Large (Feb), Jamba 1.5 Mini, Jamba 1.5 Mini (Feb), Jamba 1.6 Large, Jamba 1.6 Mini, Jamba 1.7 Large, Jamba 1.7 Mini, Jamba Instruct, and Jamba Reasoning 3B, Snowflake: Arctic and Snowflake Llama 3.3 70B, PaddlePaddle: PaddleOCR-VL-0.9B, Alibaba: QwQ-32B, QwQ 32B-Preview, Qwen Chat 14B, Qwen Chat 72B, Qwen Chat 7B, Qwen1.5 Chat 110B, Qwen1.5 Chat 14B, Qwen1.5 Chat 32B, Qwen1.5 Chat 72B, Qwen1.5 Chat 7B, Qwen2 72B, Qwen2 Instruct 7B, Qwen2 Instruct A14B 57B, Qwen2-VL 72B, Qwen2.5 Coder 32B, Qwen2.5 Coder 7B , Qwen2.5 Instruct 14B, Qwen2.5 Instruct 32B, Qwen2.5 72B, Qwen2.5 Instruct 7B, Qwen2.5 Max, Qwen2.5 Max 01-29, Qwen2.5 Omni 7B, Qwen2.5 Plus, Qwen2.5 Turbo, Qwen2.5 VL 72B, Qwen2.5 VL 7B, Qwen3 0.6B, Qwen3 1.7B, Qwen3 14B, Qwen3 235B, Qwen3 235B A22B 2507, Qwen3 235B 2507, Qwen3 30B, Qwen3 30B A3B 2507, Qwen3 32B, Qwen3 4B, Qwen3 4B 2507, Qwen3 8B, Qwen3 Coder 30B A3B, Qwen3 Coder 480B, Qwen3 Max, Qwen3 Max (Preview), Qwen3 Max Thinking, Qwen3 Next 80B A3B, Qwen3 Omni 30B A3B, Qwen3 VL 235B A22B, Qwen3 VL 30B A3B, Qwen3 VL 32B, Qwen3 VL 4B, and Qwen3 VL 8B, InclusionAI: Ling-1T, Ling-flash-2.0, Ling-mini-2.0, Ring-1T, and Ring-flash-2.0, 01.AI: Yi-Large and Yi-Lightning, and ByteDance Seed: Doubao Seed Code and Seed-OSS-36B-Instruct.