Allen Institute for AI has launched a newer model, Olmo 3 32B Think; we suggest considering that model instead.
For more information, see Comparison of Olmo 3 32B Think to other models and API provider benchmarks for Olmo 3 32B Think.
OLMo 2 32B Intelligence, Performance & Price Analysis
Model summary
Intelligence
Artificial Analysis Intelligence Index
Speed
Output tokens per second
Input Price
USD per 1M tokens
Output Price
USD per 1M tokens
Verbosity
Output tokens from Intelligence Index
Metrics are compared against models of the same class:
- Non-reasoning models → compared only with other non-reasoning models
- Reasoning models → compared against both reasoning and non-reasoning models
- Open weights models → compared only with other open weights models of the same size class:
- Tiny: ≤4B parameters
- Small: 4B–40B parameters
- Medium: 40B–150B parameters
- Large: >150B parameters
- Proprietary models → compared across proprietary and open weights models of the same price range, using a blended 3:1 input/output price ratio:
- <$0.15 per 1M tokens
- $0.15–$1 per 1M tokens
- >$1 per 1M tokens
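The blended price used for these buckets is a weighted average of input and output prices. A minimal sketch of that calculation; the 3:1 weighting is from the text above, and the example prices are the class averages quoted later on this page:

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend per-1M-token prices at the 3:1 input:output ratio described above."""
    return (3 * input_price + output_price) / 4

# Example using the class-average prices quoted later on this page
print(blended_price(0.06, 0.18))
```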
| Reasoning | No. This page shows the non-reasoning version of this model; a reasoning variant may also exist. |
|---|---|
| Input modality | Supports: text |
| Output modality | Supports: text |
| Knowledge cutoff | Dec 2023 |
| Context window | 4k tokens (~6 A4 pages of 12pt Arial text) |
| Total parameters | 32.2B |
| License | Apache 2.0 |
| Model weights | Hugging Face |
OLMo 2 32B is below average in intelligence but well priced compared with other open weights non-reasoning models of similar size. The model accepts text input, produces text output, and has a 4k-token context window with knowledge up to December 2023.
OLMo 2 32B scores 11 on the Artificial Analysis Intelligence Index, in line with the average of comparable models (11). When evaluated on the Intelligence Index, it generated 1.1M tokens, which is very concise compared with the average of 4.5M.
Pricing for OLMo 2 32B is $0.00 per 1M input tokens (competitively priced, average: $0.06) and $0.00 per 1M output tokens (competitively priced, average: $0.18). In total, it cost $0.00 to evaluate OLMo 2 32B on the Intelligence Index.
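The evaluation cost follows directly from token counts and per-1M-token prices. A sketch under that assumption; the input token count for this run is not reported on this page, so the figure below is a made-up placeholder:

```python
def eval_cost_usd(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost of a run given token counts and USD-per-1M-token prices."""
    return (input_tokens / 1e6) * input_price_per_m + \
           (output_tokens / 1e6) * output_price_per_m

# OLMo 2 32B's listed prices are $0.00/$0.00, so any token volume costs $0.00.
# The 2M input token count here is a hypothetical placeholder.
print(eval_cost_usd(2_000_000, 1_100_000, 0.00, 0.00))  # 0.0
```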
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Intelligence Evaluations
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Cost to Run Artificial Analysis Intelligence Index
Context Window
Context Window
Pricing
Pricing: Input and Output Prices
Intelligence vs. Price (Log Scale)
Pricing Comparison of OLMo 2 32B API Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time
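The end-to-end time calculation described above can be sketched as follows; the sample numbers are illustrative, not measurements for this model:

```python
def end_to_end_seconds(ttft_s: float, thinking_s: float,
                       output_speed_tps: float, answer_tokens: int = 500) -> float:
    """TTFT + reasoning ('thinking') time + time to stream the answer tokens."""
    return ttft_s + thinking_s + answer_tokens / output_speed_tps

# Non-reasoning model (thinking time 0) at a hypothetical 100 tokens/s:
print(end_to_end_seconds(ttft_s=0.5, thinking_s=0.0, output_speed_tps=100.0))  # 5.5
```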
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
Frequently Asked Questions
Common questions about OLMo 2 32B
OLMo 2 32B was released on March 13, 2025.
OLMo 2 32B was created by Allen Institute for AI.
OLMo 2 32B scores 11 (estimated) on the Artificial Analysis Intelligence Index, matching the median of other open weights non-reasoning models of similar size (median: 11).
When evaluated on the Intelligence Index, OLMo 2 32B generated 1.1M output tokens, which is very concise compared with other open weights non-reasoning models of similar size (median: 4.5M).
No, OLMo 2 32B is not a reasoning model. It provides direct responses without extended chain-of-thought reasoning.
OLMo 2 32B supports text input.
OLMo 2 32B supports text output.
No, OLMo 2 32B does not support image input. It can only process text.
No, OLMo 2 32B is not multimodal. It only supports text input.
OLMo 2 32B has a context window of 4.1k tokens. This determines how much text and conversation history the model can process in a single request.
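The "~6 A4 pages" rule of thumb from the summary table can be reproduced with rough heuristics; both constants below are assumptions for illustration, not figures from this page:

```python
TOKENS = 4096                 # 4k context window
WORDS_PER_TOKEN = 0.75        # rough heuristic for English text (assumption)
WORDS_PER_A4_PAGE = 500       # ~12pt single-spaced A4 page (assumption)

words = TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_A4_PAGE
print(f"{words:.0f} words ≈ {pages:.1f} A4 pages")
```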
Yes, OLMo 2 32B is open weights. The model weights are publicly available and can be downloaded for self-hosting.
OLMo 2 32B has 32.2 billion parameters.
OLMo 2 32B is released under the Apache 2.0 license. This license allows commercial use. View license
OLMo 2 32B achieves a score of 11 on the Artificial Analysis Intelligence Index. This composite benchmark evaluates models across reasoning, knowledge, mathematics, and coding.
OLMo 2 32B has a knowledge cutoff of December 2023. The model's training data includes information up to that date.
OLMo 2 32B is an open weights model that can be self-hosted. View providers
OLMo 2 32B is an open weights model that can be downloaded and self-hosted. Compare providers