
Comparisons of Medium Open Source AI Models (40B-150B)

This page compares open source AI models with between 40B and 150B parameters. Models are considered open source (also commonly referred to as open weights) when their weights are available to download. This allows self-hosting on your own infrastructure and enables customizing the model, such as through fine-tuning. Click on any model to see detailed metrics. For more details, including our methodology, see our FAQs.
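To make the self-hosting requirement concrete, the memory needed just to hold a model's weights scales roughly as parameter count times bytes per parameter. The sketch below is illustrative only; the 117B example size and the precisions chosen are assumptions for the example, not measurements from this comparison:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to hold the model weights.

    Ignores KV cache, activations, and framework overhead, which add more.
    """
    return params_billion * 1e9 * bytes_per_param / 1024**3

# A hypothetical 117B-parameter model at different precisions:
fp16_gb = weight_memory_gb(117, 2.0)   # 16-bit weights: roughly 218 GiB
int4_gb = weight_memory_gb(117, 0.5)   # 4-bit quantized: roughly 54 GiB
```

This is why quantized variants of these mid-size models are popular for self-hosting: halving or quartering bytes per parameter directly shrinks the GPU memory floor.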

Qwen3.5 122B A10B (Alibaba) and gpt-oss-120B (high) (OpenAI) are the highest-intelligence Medium open source models, defined as those with 40B-150B parameters, followed by Alibaba's Qwen3 Coder Next and Qwen3 Next 80B A3B.

[Chart: Artificial Analysis Intelligence Index (higher is better; some scores are estimates pending independent evaluation) against total trainable parameters in billions.]

Openness

Artificial Analysis Openness Index: Results

Openness Index assesses model openness on a 0 to 100 normalized scale (higher is more open)

Intelligence

Artificial Analysis Intelligence Index

Artificial Analysis Intelligence Index v4.0 incorporates 10 evaluations: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, and CritPt. Some scores are estimates pending independent evaluation. See the Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.


Intelligence Evaluations

Intelligence evaluations measured independently by Artificial Analysis; Higher is better
Results claimed by AI Lab (not yet independently verified)
GDPval-AA (Agentic Real-World Work Tasks, (ELO-500)/2000)
Terminal-Bench Hard (Agentic Coding & Terminal Use)
𝜏²-Bench Telecom (Agentic Tool Use)
AA-LCR (Long Context Reasoning)
AA-Omniscience Accuracy (Knowledge)
AA-Omniscience Non-Hallucination Rate (1 - Hallucination Rate)
Humanity's Last Exam (Reasoning & Knowledge)
GPQA Diamond (Scientific Reasoning)
SciCode (Coding)
IFBench (Instruction Following)
CritPt (Physics Reasoning)
MMMU Pro (Visual Reasoning)
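Of the evaluations above, GDPval-AA is the only one reported as an ELO rating rather than a direct score; its chart label states it is mapped onto the index scale as (ELO − 500) / 2000. A minimal illustration of that transform (the function name and example ELO values here are hypothetical):

```python
def gdpval_aa_normalized(elo: float) -> float:
    """Map a GDPval-AA ELO rating onto the index scale using the
    (ELO - 500) / 2000 transform stated in the chart label."""
    return (elo - 500) / 2000

gdpval_aa_normalized(1500)  # -> 0.5
gdpval_aa_normalized(500)   # -> 0.0
```

Under this transform, an ELO of 500 maps to the bottom of the scale and each 20 ELO points is worth one percentage point.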

While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.


Size

Model Size: Total and Active Parameters

Comparison between total model parameters and parameters active during inference

The total number of trainable weights and biases in the model, expressed in billions. These parameters are learned during training and determine the model's ability to process and generate responses.

The number of parameters actually executed during each inference forward pass, expressed in billions. For Mixture of Experts (MoE) models, a routing mechanism selects a subset of experts per token, resulting in fewer active than total parameters. Dense models use all parameters, so active equals total.
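The active-vs-total distinction can be sketched with a toy MoE parameter count. Everything in this example is made up for illustration (the expert counts, sizes, and top-k here do not describe any model listed on this page):

```python
def moe_param_counts(shared_b: float, expert_b: float,
                     num_experts: int, top_k: int) -> tuple[float, float]:
    """Return (total, active) parameter counts in billions for a toy MoE.

    total  = shared (attention, embeddings, router) + all experts
    active = shared + only the top_k experts the router selects per token
    """
    total = shared_b + num_experts * expert_b
    active = shared_b + top_k * expert_b
    return total, active

# Toy configuration: 2B shared, 64 experts of 1.2B each, 4 routed per token.
total, active = moe_param_counts(2.0, 1.2, 64, 4)  # roughly (78.8, 6.8)
```

This is the pattern behind entries like "80B (3B active)": the full expert set must sit in memory, but each forward pass only executes the shared parameters plus the routed experts, which is why MoE models can be far cheaper to run than dense models of the same total size.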

Intelligence vs. Active Parameters

[Scatter chart: Artificial Analysis Intelligence Index vs. active parameters at inference time; the most attractive quadrant is high intelligence with few active parameters. Organizations shown: Alibaba, InclusionAI, MBZUAI Institute of Foundation Models, Meta, Nous Research, NVIDIA, OpenAI, Prime Intellect, Z AI.]



Intelligence vs. Total Parameters

[Scatter chart: Artificial Analysis Intelligence Index vs. total parameters in billions; the most attractive quadrant is high intelligence with few total parameters. Organizations shown: Alibaba, InclusionAI, MBZUAI Institute of Foundation Models, Meta, Nous Research, NVIDIA, OpenAI, Prime Intellect, Z AI.]



Context Window

Context Window: Token Limit; Higher is better

Larger context windows are relevant to RAG (Retrieval Augmented Generation) workflows, which typically involve retrieving and reasoning over large amounts of data.

Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
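A simple budget check under the combined-limit model described above. The numbers are illustrative: the 131k window matches entries in this comparison, but the 32k output cap is a hypothetical per-model limit, not a figure from this page:

```python
def fits_context(prompt_tokens: int, max_output_tokens: int,
                 context_window: int, max_output_limit: int) -> bool:
    """True if a request fits both the combined input+output window
    and the (often lower) per-model output-token limit."""
    return (prompt_tokens + max_output_tokens <= context_window
            and max_output_tokens <= max_output_limit)

# 131,072-token combined window, hypothetical 32,768-token output cap:
fits_context(100_000, 20_000, 131_072, 32_768)  # True
fits_context(100_000, 40_000, 131_072, 32_768)  # False: output cap exceeded
```

In practice this means a long RAG prompt eats directly into the room left for the model's answer, which is why the combined limit (not just the headline window size) matters when sizing requests.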


Further details

| Model | Creator | Intelligence Index | Parameters (Total, Active) | Context Window | Price | Output Speed | Weights | API Providers |
|---|---|---|---|---|---|---|---|---|
| Qwen3.5 122B A10B (Reasoning) | Alibaba | 42 | 125B (10B active) | 262k | $1.1 | 170 | 🤗 | Novita, Alibaba Cloud |
| gpt-oss-120B (high) | OpenAI | 33 | 117B (5.1B active) | 131k | $0.3 | 333 | 🤗 | Cloudflare, Groq, +19 more |
| Qwen3 Coder Next | Alibaba | 28 | 79.7B (3B active) | 256k | $0.5 | 141 | 🤗 | Together.ai, Novita, Parasail |
| Qwen3 Next 80B A3B (Reasoning) | Alibaba | 27 | 80B (3B active) | 262k | $1.9 | 136 | 🤗 | Hyperbolic, Google, Novita, +4 more |
| K2-V2 (high) | MBZUAI Institute of Foundation Models | 25 | 70B | 512k | - | - | 🤗 | - |
| gpt-oss-120B (low) | OpenAI | 24 | 117B (5.1B active) | 131k | $0.3 | 342 | 🤗 | DeepInfra, Eigen AI, Groq, +17 more |
| K2 Think V2 | MBZUAI Institute of Foundation Models | 24 | 70B | 262k | - | - | Not available | - |
| Llama Nemotron Super 49B v1.5 (Reasoning) | NVIDIA | 24 | 49B | 128k | $0.2 | 79 | 🤗 | DeepInfra |
| INTELLECT-3 | Prime Intellect | 24 | 107B | 131k | - | - | 🤗 | - |
| Qwen3 Next 80B A3B Instruct | Alibaba | 24 | 80B (3B active) | 262k | $0.9 | 136 | 🤗 | DeepInfra, Hyperbolic, Novita, +4 more |
| GLM-4.6V (Reasoning) | Z AI | 21 | 108B | 128k | $0.5 | 79 | 🤗 | SiliconFlow, DeepInfra, Parasail, +1 more |
| K2-V2 (medium) | MBZUAI Institute of Foundation Models | 21 | 70B | 512k | - | - | 🤗 | - |
| Ring-flash-2.0 | InclusionAI | 21 | 103B (6.1B active) | 128k | $0.2 | 89 | 🤗 | SiliconFlow |
| Hermes 4 - Llama-3.1 70B (Reasoning) | Nous Research | 20 | 70.6B | 128k | $0.2 | 80 | 🤗 | Nebius |
| Ling-flash-2.0 | InclusionAI | 20 | 103B (6.1B active) | 128k | $0.2 | 76 | 🤗 | SiliconFlow |
| Devstral 2 | Mistral | 19 | 125B | 256k | - | 85 | 🤗 | Mistral |
| Llama 3.3 Nemotron Super 49B v1 (Reasoning) | NVIDIA | 18 | 49B | 128k | - | - | 🤗 | - |
| K2-V2 (low) | MBZUAI Institute of Foundation Models | 16 | 70B | 512k | - | - | 🤗 | - |
| GLM-4.6V (Non-reasoning) | Z AI | 16 | 108B | 128k | $0.5 | 30 | 🤗 | Parasail, SiliconFlow, Novita |
| DeepSeek R1 Distill Llama 70B | DeepSeek | 16 | 70B | 128k | $0.9 | 60 | 🤗 | SambaNova, DeepInfra, Scaleway |
| Command A | Cohere | 15 | 111B | 256k | $4.4 | 71 | 🤗 | Microsoft Azure, Cohere |
| Llama Nemotron Super 49B v1.5 (Non-reasoning) | NVIDIA | 15 | 49B | 128k | $0.2 | 76 | 🤗 | DeepInfra |
| Llama 3.3 Instruct 70B | Meta | 14 | 70B | 128k | $0.6 | 98 | 🤗 | Databricks, Parasail, Amazon Bedrock, +18 more |
| Kimi Linear 48B A3B Instruct | Kimi | 14 | 49.1B (3B active) | 1.00M | - | - | 🤗 | - |
| Llama 3.3 Nemotron Super 49B v1 (Non-reasoning) | NVIDIA | 14 | 49B | 128k | - | - | 🤗 | - |
| Hermes 4 - Llama-3.1 70B (Non-reasoning) | Nous Research | 14 | 70.6B | 128k | $0.2 | 73 | 🤗 | Nebius |
| Llama 4 Scout | Meta | 14 | 109B (17B active) | 10.0M | $0.3 | 130 | 🤗 | Eigen AI, Google, DeepInfra, +6 more |
| Llama 3.1 Nemotron Instruct 70B | NVIDIA | 14 | 70B | 128k | $1.2 | 33 | 🤗 | DeepInfra |
| Llama 3.2 Instruct 90B (Vision) | Meta | 12 | 90B | 128k | $0.7 | 43 | 🤗 | Google, Amazon Bedrock, Microsoft Azure, +1 more |
| Jamba 1.7 Mini | AI21 Labs | 11 | 52B (12B active) | 258k | - | - | 🤗 | - |

For models without an active-parameter note, active parameters equal total parameters (dense models). 🤗 indicates weights are available to download on Hugging Face; "-" indicates the metric is not available.