Anthropic has launched a newer model, Claude 4.1 Opus; we suggest considering that model instead.
For more information, see Comparison of Claude 4.1 Opus to other models and API provider benchmarks for Claude 4.1 Opus.
Claude 4 Opus (Reasoning) Intelligence, Performance & Price Analysis
Model summary
Intelligence
Artificial Analysis Intelligence Index
Speed
Output tokens per second
Input Price
USD per 1M tokens
Output Price
USD per 1M tokens
Verbosity
Output tokens from Intelligence Index
Metrics are compared against models of the same class:
- Non-reasoning models → compared only with other non-reasoning models
- Reasoning models → compared across both reasoning and non-reasoning models
- Open weights models → compared only with other open weights models of the same size class:
- Tiny: ≤4B parameters
- Small: 4B–40B parameters
- Medium: 40B–150B parameters
- Large: >150B parameters
- Proprietary models → compared across proprietary and open weights models of the same price range, using a blended 3:1 input/output price ratio:
- <$0.15 per 1M tokens
- $0.15–$1 per 1M tokens
- >$1 per 1M tokens
| Reasoning | Yes (this page shows the reasoning version of this model; a non-reasoning variant may also exist) |
|---|---|
| Input modality | Supports: text, image |
| Output modality | Supports: text |
| Knowledge cutoff | Mar 1, 2025 |
| Context window | 200k tokens (~300 A4 pages in size-12 Arial font) |
Claude 4 Opus (Reasoning) is above average in intelligence but particularly expensive compared with other models in its class. It is also slower than average, though fairly concise. The model supports text and image input, outputs text, and has a 200k-token context window with knowledge up to March 2025.
Claude 4 Opus (Reasoning) scores 27 on the Artificial Analysis Intelligence Index, placing it above average among comparable models (average: 26). When evaluated on the Intelligence Index, it generated 10M tokens, which is fairly concise compared with the average of 11M.
Pricing for Claude 4 Opus (Reasoning) is $15.00 per 1M input tokens (expensive, average: $1.55) and $75.00 per 1M output tokens (expensive, average: $10.00). In total, it cost $1,471.14 to evaluate Claude 4 Opus (Reasoning) on the Intelligence Index.
At 47 tokens per second, Claude 4 Opus (Reasoning) is slower than average (74).
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index v4.0 includes: GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index: Includes GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt evaluations spanning reasoning, knowledge, math & coding; Evaluation results measured independently by Artificial Analysis","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Open Weights / Proprietary","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index: Includes GDPval-AA, 𝜏²-Bench Telecom, Terminal-Bench Hard, SciCode, AA-LCR, AA-Omniscience, IFBench, Humanity's Last Exam, GPQA Diamond, CritPt evaluations spanning reasoning, knowledge, math & coding; Evaluation results measured independently by Artificial Analysis","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Intelligence Evaluations
While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
While higher intelligence models are typically more expensive, they do not all follow the same price-quality curve.
Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio).
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).
Cost to Run Artificial Analysis Intelligence Index
The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
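As a rough illustration, a figure of this kind is just token counts multiplied by per-token prices. The sketch below uses this model's published prices; the token counts are hypothetical placeholders, not the actual figures behind the $1,471.14 total.

```python
# Minimal sketch: cost of an evaluation run from token usage and pricing.
# The token counts below are hypothetical placeholders, not the actual
# per-evaluation figures used by Artificial Analysis.

def run_cost_usd(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD, with prices quoted per 1M tokens."""
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

# Claude 4 Opus (Reasoning) pricing from this page: $15.00 / $75.00 per 1M tokens.
print(run_cost_usd(input_tokens=2_000_000, output_tokens=10_000_000,
                   input_price_per_m=15.00, output_price_per_m=75.00))
# -> 780.0 (illustrative only)
```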
Context Window
Larger context windows are relevant to RAG (Retrieval Augmented Generation) LLM workflows, which typically involve reasoning over and retrieving information from large amounts of data.
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
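For a rough sense of what fits in a 200k-token window, the sketch below uses a crude ~4 characters-per-token heuristic rather than a real tokenizer, so treat it as an order-of-magnitude estimate only.

```python
# Minimal sketch: rough check of whether a document fits in a 200k-token context
# window. CHARS_PER_TOKEN is a crude English-text approximation, not a tokenizer.

CONTEXT_WINDOW_TOKENS = 200_000  # combined input + output token budget
CHARS_PER_TOKEN = 4              # rough approximation (assumption)

def fits_in_context(text: str, reserved_output_tokens: int = 4_000) -> bool:
    estimated_input_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_input_tokens + reserved_output_tokens <= CONTEXT_WINDOW_TOKENS

print(fits_in_context("lorem ipsum " * 50_000))  # ~150k estimated tokens -> True
```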
{"@context":"https://schema.org","@type":"Dataset","name":"Context Window","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Context window is the maximum number of tokens a model can accept in a single request. Higher limits allow longer prompts, documents, and more complex instructions.","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Pricing
Pricing: Input and Output Prices
Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Intelligence vs. Price (Log Scale)
Pricing Comparison of Claude 4 Opus (Reasoning) API Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed
Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
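One way to reproduce an output-speed measurement of this kind against any streaming API is sketched below; `stream_chunks` and `count_tokens` are hypothetical stand-ins for whichever client and tokenizer are actually used, and this is not necessarily Artificial Analysis's exact methodology.

```python
# Minimal sketch: output speed as tokens per second after the first chunk.
# `stream_chunks` and `count_tokens` are hypothetical stand-ins, not a real API.
import time

def measure_output_speed(stream_chunks, count_tokens) -> float:
    first_chunk_time = None
    tokens = 0
    for chunk in stream_chunks():        # yields text chunks as they arrive
        now = time.monotonic()
        if first_chunk_time is None:
            first_chunk_time = now       # timer starts at the first chunk...
            continue                     # ...whose tokens are excluded
        tokens += count_tokens(chunk)
    if first_chunk_time is None:
        return 0.0                       # nothing was streamed
    elapsed = time.monotonic() - first_chunk_time
    return tokens / elapsed if elapsed > 0 else 0.0
```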
{"@context":"https://schema.org","@type":"Dataset","name":"Output Speed","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Output speed measures tokens generated per second after the first token is received. Higher values mean faster model output and higher throughput under comparable conditions.","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
Time to first answer token received, in seconds, after API request sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents time to receive the completion.
End-to-End Response Time
Seconds to receive a 500 token response. Key components (see the sketch after this list):
- Input time: Time to receive the first response token
- Thinking time (reasoning models only): Time reasoning models spend outputting tokens to reason prior to providing an answer. The token count is based on the average reasoning tokens across a diverse set of 60 prompts (methodology details).
- Answer time: Time to generate 500 output tokens, based on output speed
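A minimal sketch of how these components combine, using figures from this page (1.28 s to first token, 47 tokens/s) and an assumed reasoning-token count, since the actual average is not stated here:

```python
# Minimal sketch: composing end-to-end response time from the components above.
# The reasoning-token count is an assumption; the 1.28 s figure is treated as
# input time (time to the first response token).

ttft_s = 1.28             # input time: time to first token (s)
output_speed_tps = 47.0   # output tokens per second
reasoning_tokens = 2_000  # assumed average 'thinking' tokens (placeholder)
answer_tokens = 500       # response length used for this metric

thinking_time_s = reasoning_tokens / output_speed_tps
answer_time_s = answer_tokens / output_speed_tps
print(round(ttft_s + thinking_time_s + answer_time_s, 1))  # ~54.5 s with these assumptions
```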
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
The total number of trainable weights and biases in the model, expressed in billions. These parameters are learned during training and determine the model's ability to process and generate responses.
The number of parameters actually executed during each inference forward pass, expressed in billions. For Mixture of Experts (MoE) models, a routing mechanism selects a subset of experts per token, resulting in fewer active than total parameters. Dense models use all parameters, so active equals total.
Comparisons to Claude 4 Opus
Claude 4 Opus
gpt-oss-120B (high)
gpt-oss-20B (high)
GPT-5.2 (xhigh)
GPT-5.2 Codex (xhigh)
Llama 4 Maverick
Gemini 3 Pro Preview (high)
Gemini 3 Flash
Claude Opus 4.5
Claude 4.5 Haiku
Claude 4.5 Sonnet
Claude Opus 4.6 (Adaptive)
Claude Opus 4.6
Mistral Large 3
DeepSeek V3.2
Grok 4.1 Fast
Grok 4
Nova 2.0 Pro Preview (medium)
MiniMax-M2.1
MiniMax-M2.5
NVIDIA Nemotron 3 Nano
Kimi K2.5
K-EXAONE
MiMo-V2-Flash (Feb 2026)
KAT-Coder-Pro V1
K2 Think V2
GLM-4.7
GLM-5
Qwen3 235B A22B 2507
FAQ
Common questions about Claude 4 Opus (Reasoning)
Claude 4 Opus (Reasoning) was released on May 22, 2025.
Claude 4 Opus (Reasoning) was created by Anthropic.
Claude 4 Opus (Reasoning) scores 27 (estimated) on the Artificial Analysis Intelligence Index, placing it above average among other reasoning models in a similar price tier (median: 26).
Claude 4 Opus (Reasoning) generates output at 47.0 tokens per second (based on Anthropic's API), which is below average compared to other reasoning models in a similar price tier (median: 74.3 t/s).
Claude 4 Opus (Reasoning) has a time to first token (TTFT) of 1.28s (based on Anthropic's API), which is somewhat higher than average compared to other reasoning models in a similar price tier (median: 1.18s).
Claude 4 Opus (Reasoning) costs $15.00 per 1M input tokens (at the higher end, median: $1.55) and $75.00 per 1M output tokens (at the higher end, median: $10.00), based on Anthropic's API.
Claude 4 Opus (Reasoning) costs $15.00 per 1M input tokens and $75.00 per 1M output tokens (based on Anthropic's API). For a blended rate (3:1 input to output ratio), this is $30.00 per 1M tokens. Pricing may vary by provider. Compare provider pricing →
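The blended rate is simply a 3:1 weighted average of the input and output prices, as the sketch below shows:

```python
# Minimal sketch: the 3:1 blended price is a weighted average of input and
# output prices (3 parts input, 1 part output).

input_price = 15.00   # USD per 1M input tokens
output_price = 75.00  # USD per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(blended)  # 30.0 USD per 1M tokens, matching the blended rate above
```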
When evaluated on the Intelligence Index, Claude 4 Opus (Reasoning) generated 10M output tokens, which is more concise than average compared to other reasoning models in a similar price tier (median: 11M).
Yes, Claude 4 Opus (Reasoning) is a reasoning model. It uses extended thinking or chain-of-thought reasoning to work through complex problems before providing an answer.
Claude 4 Opus (Reasoning) supports text and image input.
Claude 4 Opus (Reasoning) supports text output.
Yes, Claude 4 Opus (Reasoning) supports image input and can analyze, describe, and answer questions about images.
Yes, Claude 4 Opus (Reasoning) is multimodal. It can process text and image input and generate text output.
Claude 4 Opus (Reasoning) has a context window of 200k tokens. This determines how much text and conversation history the model can process in a single request.
No, Claude 4 Opus (Reasoning) is proprietary. The model weights are not publicly available.
Claude 4 Opus (Reasoning) is a proprietary model and Anthropic has not disclosed the model size or parameter count.
Claude 4 Opus (Reasoning) achieves a score of 27 on the Artificial Analysis Intelligence Index. This composite benchmark evaluates models across reasoning, knowledge, mathematics, and coding.
Claude 4 Opus (Reasoning) has a knowledge cutoff of March 2025. The model's training data includes information up to this date.
Yes, Claude 4 Opus (Reasoning) is available via API through 3 providers. Compare API providers →
Claude 4 Opus (Reasoning) is available through 3 API providers. Compare providers →