Comparisons of Small Open Source AI Models (4B-40B)
Open source AI models with between 4B and 40B parameters. Models are considered open source (also commonly referred to as open weights) when their weights are available to download. This allows self-hosting on your own infrastructure and enables customization of the model, such as through fine-tuning. Click on any model to see detailed metrics. For more details, including our methodology, see our FAQs.
Qwen3.5 27B (Reasoning) and Qwen3.5 27B (Non-reasoning) are the highest-intelligence Small open source models, defined as those with 4B-40B parameters, followed by Qwen3.5 35B A3B and Qwen3.5 9B.
Intelligence
Artificial Analysis Intelligence Index; Higher is better
Total Parameters
Trainable parameters in billions
Openness
Artificial Analysis Openness Index: Results
Openness Index assesses model openness on a 0 to 100 normalized scale (higher is more open)
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt
Reasoning models are indicated by a lightbulb icon.
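The exact formula Artificial Analysis uses to combine these evaluations into a single index is not given in this excerpt. As a minimal sketch, assuming a simple equal-weighted mean of per-evaluation scores on a 0-100 normalized scale (the scores below are hypothetical, not real results):

```python
# Hedged sketch: combining evaluation scores into a single index.
# ASSUMPTION: equal weighting across evaluations; the actual
# Artificial Analysis aggregation may differ.

def intelligence_index(scores: dict[str, float]) -> float:
    """Equal-weighted mean of per-evaluation scores on a 0-100 scale."""
    if not scores:
        raise ValueError("at least one evaluation score is required")
    return sum(scores.values()) / len(scores)

# Hypothetical scores for illustration only (not real results).
example = {
    "GPQA Diamond": 60.0,
    "IFBench": 45.0,
    "Humanity's Last Exam": 12.0,
}
print(round(intelligence_index(example), 1))  # prints 39.0
```

With ten evaluations, the same mean is simply taken over ten entries; any real weighting scheme would replace the uniform average here.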
Intelligence Evaluations
Intelligence evaluations measured independently by Artificial Analysis; Higher is better
Results claimed by the AI lab (not yet independently verified)
GDPval-AA
Terminal-Bench Hard
𝜏²-Bench Telecom
AA-LCR
AA-Omniscience Accuracy
AA-Omniscience Non-Hallucination Rate
Humanity's Last Exam
GPQA Diamond
SciCode
IFBench
CritPt
MMMU-Pro
Size
Model Size: Total and Active Parameters
Comparison between total model parameters and parameters active during inference
Active Parameters
Passive Parameters
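The total-vs-active distinction matters mainly for mixture-of-experts (MoE) models: only a subset of experts runs for each token, so per-token compute tracks active parameters while memory footprint tracks total parameters. A minimal sketch of the active fraction, using Qwen3.5 35B A3B's figures from this page (36B total, 3B active at inference time):

```python
# Hedged sketch: how total vs. active parameters relate for an MoE model.
# Per-token compute scales with active parameters; memory with total.

def active_fraction(total_billions: float, active_billions: float) -> float:
    """Fraction of parameters that are active during inference."""
    if active_billions > total_billions:
        raise ValueError("active parameters cannot exceed total parameters")
    return active_billions / total_billions

# Qwen3.5 35B A3B, per this page: 36B total, 3B active at inference time.
print(f"{active_fraction(36, 3):.1%}")  # prints "8.3%"
```

For dense models, active equals total, so the fraction is 100% and the two chart axes coincide.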
Intelligence vs. Active Parameters
Active Parameters at Inference Time; Artificial Analysis Intelligence Index
Most attractive quadrant
Alibaba
Google
NVIDIA
OpenAI
ServiceNow
Intelligence vs. Total Parameters
Artificial Analysis Intelligence Index; Size in Parameters (Billions)
Most attractive quadrant
Alibaba
Google
NVIDIA
OpenAI
ServiceNow
Context Window
Context Window: Token Limit; Higher is better
Further details
| Model | Creator | Intelligence Index | Parameters | Context Window | Price (USD per 1M tokens) | Output Speed (tokens/s) | Weights | Provider Benchmarks |
|---|---|---|---|---|---|---|---|---|
| Qwen3.5 27B (Reasoning) | Alibaba | 42 | 27.8B | 262k | $0.8 | 82 | 🤗 +1 more | View |
| Qwen3.5 27B (Non-reasoning) | Alibaba | 37 | 27.8B | 262k | $0.8 | 86 | 🤗 | View |
| Qwen3.5 35B A3B (Reasoning) | Alibaba | 37 | 36B (3B active at inference time) | 262k | $0.7 | 188 | 🤗 +1 more | View |
| Qwen3.5 9B (Reasoning) | Alibaba | 32 | 9.65B | 262k | $0.1 | 170 | 🤗 | View |
| Qwen3.5 35B A3B (Non-reasoning) | Alibaba | 31 | 36B (3B active at inference time) | 262k | $0.7 | 190 | 🤗 | View |
| Nemotron Cascade 2 30B A3B | NVIDIA | 28 | 31.6B (3B active at inference time) | 262k | - | - | 🤗 | View |
| Apriel-v1.6-15B-Thinker | ServiceNow | 28 | 15B | 128k | - | 81 | 🤗 | View |
| Qwen3.5 9B (Non-reasoning) | Alibaba | 27 | 9.65B | 262k | $0.1 | 186 | 🤗 | View |
| Qwen3.5 4B (Reasoning) | Alibaba | 27 | 4.66B | 262k | $0.1 | 234 | 🤗 | View |
| gpt-oss-20B (high) | OpenAI | 24 | 21B (3.6B active at inference time) | 131k | $0.1 | 280 | 🤗 +9 more | View |
| NVIDIA Nemotron 3 Nano 30B A3B (Reasoning) | NVIDIA | 24 | 31.6B (3.6B active at inference time) | 1.00M | $0.1 | 142 | 🤗 | View |
| HyperCLOVA X SEED Think (32B) | Naver | 24 | 32B | 128k | - | - | 🤗 | View |
| Qwen3.5 4B (Non-reasoning) | Alibaba | 23 | 4.66B | 262k | $0.1 | 235 | 🤗 | View |
| gpt-oss-20B (low) | OpenAI | 21 | 21B (3.6B active at inference time) | 131k | $0.1 | 237 | 🤗 +9 more | View |
| Tri-21B-think Preview | Trillion Labs | 20 | 21B | 32.0k | - | - | 🤗 | View |
| Devstral Small 2 | Mistral | 19 | 24B | 256k | - | 75 | 🤗 | View |
| Tri-21B-Think | Trillion Labs | 19 | 21B | 32.0k | - | - | 🤗 | View |
| Magistral Small 1.2 | Mistral | 18 | 24B | 128k | $0.8 | 151 | 🤗 | View |
| EXAONE 4.0 32B (Reasoning) | LG AI Research | 17 | 32B | 131k | - | - | 🤗 | View |
| DeepSeek R1 0528 Qwen3 8B | DeepSeek | 16 | 8.19B | 32.8k | - | - | 🤗 | View |
| Ministral 3 14B | Mistral | 16 | 14B | 256k | $0.2 | 116 | 🤗 | View |
| Falcon-H1R-7B | TII UAE | 16 | 7B | 256k | - | - | 🤗 | View |
| Qwen3 Omni 30B A3B (Reasoning) | Alibaba | 16 | 35.3B (3B active at inference time) | 65.5k | $0.4 | 89 | 🤗 | View |
| Step3 VL 10B | StepFun | 15 | 10.2B | 65.5k | - | - | 🤗 | View |
| NVIDIA Nemotron Nano 12B v2 VL (Reasoning) | NVIDIA | 15 | 13.2B | 128k | $0.3 | 130 | 🤗 | View |
| Ministral 3 8B | Mistral | 15 | 8B | 256k | $0.1 | 175 | 🤗 | View |
| NVIDIA Nemotron Nano 9B V2 (Reasoning) | NVIDIA | 15 | 9B | 131k | $0.1 | 154 | 🤗 | View |
| Llama 3.1 Nemotron Nano 4B v1.1 (Reasoning) | NVIDIA | 14 | 4.51B | 128k | - | - | 🤗 | View |
| Olmo 3.1 32B Think | Allen Institute for AI | 14 | 32.2B | 65.5k | - | 90 | 🤗 | View |
| NVIDIA Nemotron 3 Nano 30B A3B (Non-reasoning) | NVIDIA | 13 | 31.6B (3.6B active at inference time) | 1.00M | $0.1 | 95 | 🤗 | View |
| NVIDIA Nemotron Nano 9B V2 (Non-reasoning) | NVIDIA | 13 | 9B | 131k | $0.1 | 166 | 🤗 | View |
| Sarvam 30B (high) | Sarvam | 12 | 32.2B (2.4B active at inference time) | 65.5k | - | 155 | 🤗 | View |
| Olmo 3.1 32B Instruct | Allen Institute for AI | 12 | 32.2B | 65.5k | $0.3 | 54 | 🤗 | View |
| EXAONE 4.0 32B (Non-reasoning) | LG AI Research | 12 | 32B | 131k | - | - | 🤗 | View |
| DeepHermes 3 - Mistral 24B Preview (Non-reasoning) | Nous Research | 11 | 24B | 32.0k | - | - | 🤗 | View |
| Granite 4.0 H Small | IBM | 11 | 32B (9B active at inference time) | 128k | $0.1 | 421 | 🤗 | View |
| Qwen3 Omni 30B A3B Instruct | Alibaba | 11 | 35.3B (3B active at inference time) | 65.5k | $0.4 | 91 | 🤗 | View |
| LFM2 24B A2B | Liquid AI | 10 | 23.8B (2.3B active at inference time) | 32.8k | $0.1 | 241 | 🤗 | View |
| Phi-4 | Microsoft Azure | 10 | 14B | 16.0k | $0.2 | 33 | 🤗 | View |
| Gemma 3 27B Instruct | Google | 10 | 27.4B | 128k | - | 28 | 🤗 +3 more | View |
| NVIDIA Nemotron Nano 12B v2 VL (Non-reasoning) | NVIDIA | 10 | 13.2B | 128k | $0.3 | 138 | 🤗 | View |
| Phi-4 Multimodal Instruct | Microsoft Azure | 10 | 5.6B | 128k | - | 17 | 🤗 | View |
| Reka Flash 3 | Reka AI | 10 | 21B | 128k | $0.3 | - | 🤗 | View |
| Olmo 3 7B Think | Allen Institute for AI | 9 | 7B | 65.5k | - | - | 🤗 | View |
| Molmo 7B-D | Allen Institute for AI | 9 | 8.02B | 4.10k | - | - | 🤗 | View |
| Ling-mini-2.0 | InclusionAI | 9 | 16.3B (1.4B active at inference time) | 131k | - | - | 🤗 | View |
| Gemma 3 12B Instruct | Google | 9 | 12.2B | 128k | - | 29 | 🤗 +2 more | View |
| Llama 3.2 Instruct 11B (Vision) | Meta | 9 | 11B | 128k | $0.2 | 52 | 🤗 | View |
| Olmo 3 7B Instruct | Allen Institute for AI | 8 | 7B | 65.5k | $0.1 | - | 🤗 | View |
| DeepHermes 3 - Llama-3.1 8B Preview (Non-reasoning) | Nous Research | 8 | 8B | 128k | - | - | 🤗 | View |
| Molmo2-8B | Allen Institute for AI | 7 | 8.66B | 36.9k | - | - | 🤗 | View |
| LFM2 8B A1B | Liquid AI | 7 | 8.34B (1.5B active at inference time) | 32.8k | - | - | 🤗 | View |
| Gemma 3n E4B Instruct | Google | 6 | 8.39B (4B active at inference time) | 32.0k | $0.0 | 26 | 🤗 | View |
| Apertus 8B Instruct | Swiss AI Initiative | 6 | 8B | 65.5k | $0.1 | 137 | 🤗 | View |
| Gemma 3n E2B Instruct | Google | 5 | 5.98B (2B active at inference time) | 32.0k | - | - | 🤗 | View |
| Gemma 4 31B (Reasoning) | Google | - | 31B | 256k | - | - | 🤗 | View |
| Gemma 4 26B A4B (Reasoning) | Google | - | 27B (4B active at inference time) | 256k | - | - | 🤗 | View |