For more information, see Comparison of GPT-4o (Nov '24) to other models and API provider benchmarks for GPT-4o (Nov '24).
Highlight metrics on this page: Artificial Analysis Intelligence Index, output speed (output tokens per second), price (USD per 1M input and output tokens), and output tokens generated across the Intelligence Index evaluations. Metrics are compared against models of the same class.
| Property | Value |
|---|---|
| Reasoning | No (this page shows the non-reasoning version of this model; a reasoning variant may also exist) |
| Input modality | Text, image |
| Output modality | Text |
| Knowledge cutoff | Oct 1, 2023 |
| Context window | 128k tokens (~192 A4 pages in 12 pt Arial) |
GPT-4o (Nov '24) is below average in intelligence and somewhat expensive compared with other non-reasoning models. It is also notably fast and highly concise. The model supports image input and has a 128k-token context window with knowledge up to October 2023.
GPT-4o (Nov '24) scores 27 on the Artificial Analysis Intelligence Index, placing it below average among comparable models (which average 30). Across the Intelligence Index evaluations it generated 5.7M tokens, which is very concise compared with the average of 7.5M.
Pricing for GPT-4o (Nov '24) is $2.50 per 1M input tokens (somewhat expensive, average: $2.00) and $10.00 per 1M output tokens (moderately priced, average: $10.00). In total, it cost $202.33 to evaluate GPT-4o (Nov '24) on the Intelligence Index.
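As a quick illustration of how these per-token prices translate into a bill, here is a minimal sketch in Python; the prices are the GPT-4o (Nov '24) figures above, while the token counts in the example call are hypothetical, not the actual input/output split of the Intelligence Index run.

```python
# Minimal sketch: total API cost from token counts and per-1M-token prices.
INPUT_PRICE_PER_M = 2.50    # USD per 1M input tokens (from this page)
OUTPUT_PRICE_PER_M = 10.00  # USD per 1M output tokens (from this page)

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the total cost in USD for a given number of input and output tokens."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

print(f"${api_cost(10_000_000, 2_000_000):.2f}")  # hypothetical 10M in, 2M out -> $45.00
```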
At 101 output tokens per second, GPT-4o (Nov '24) is notably fast compared with similar models (average: 44 tokens per second).
Artificial Analysis Intelligence Index: Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Artificial Analysis Coding Index: Represents the average of coding evaluations in the Artificial Analysis Intelligence Index. Currently includes: LiveCodeBench, SciCode, Terminal-Bench Hard. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Artificial Analysis Agentic Index: Represents the average of agentic capabilities benchmarks in the Artificial Analysis Intelligence Index (Terminal-Bench Hard, 𝜏²-Bench Telecom).
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Agentic Index","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Represents the average of agentic capabilities benchmarks in the Artificial Analysis Intelligence Index (Terminal-Bench Hard, 𝜏²-Bench Telecom)","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Open weights: Indicates whether the model weights are available. Models are labelled as 'Commercial Use Restricted' if the weights are available but commercial use is limited (typically requires obtaining a paid license).
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Open Weights vs Proprietary","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v3.0 incorporates 10 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
{"@context":"https://schema.org","@type":"Dataset","name":"Artificial Analysis Intelligence Index by Model Type","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Artificial Analysis Intelligence Index v3.0 incorporates 10 evaluations: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
While model intelligence generally translates across use cases, specific evaluations may be more relevant for certain use cases.
While higher intelligence models are typically more expensive, they do not all follow the same price-quality curve.
Blended price: Price per token, represented as USD per million tokens. Price is a blend of input and output token prices (3:1 ratio).
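For example, assuming the 3:1 blend is a simple weighted average of the input and output prices, the blended figure for GPT-4o (Nov '24) works out as follows (a sketch under that assumption):

```python
# Blended price sketch, assuming a weighted average at a 3:1 input:output token ratio.
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """USD per 1M tokens, weighting input price 3x relative to output price."""
    return (3 * input_per_m + 1 * output_per_m) / 4

print(blended_price(2.50, 10.00))  # GPT-4o (Nov '24): 4.375 USD per 1M tokens
```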
Figures represent performance of the model's first-party API (e.g. OpenAI for o1) or the median across providers where a first-party API is not available (e.g. Meta's Llama models).
There is a trade-off between model quality and output speed, with higher intelligence models typically having lower output speed.
Output speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
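A minimal sketch of how such a metric can be computed from a streamed response; `stream_token_counts` is a hypothetical iterable of per-chunk token counts standing in for whichever streaming client is actually used, not a real library API.

```python
import time

def measure_output_speed(stream_token_counts) -> float:
    """Output tokens per second, timed from the arrival of the first chunk."""
    first_chunk_time = None
    total_tokens = 0
    for tokens_in_chunk in stream_token_counts:  # iterating pulls chunks off the wire
        now = time.monotonic()
        if first_chunk_time is None:
            first_chunk_time = now               # clock starts at the first chunk
        total_tokens += tokens_in_chunk
    if first_chunk_time is None:
        return float("nan")                      # stream produced no chunks
    elapsed = time.monotonic() - first_chunk_time
    return total_tokens / elapsed if elapsed > 0 else float("nan")
```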
End-to-end response time: Seconds to receive a 500 token response, considering input processing time, 'thinking' time of reasoning models, and output speed.
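As a rough model of that end-to-end figure (a sketch, not the exact methodology): total time is approximately the time to first answer token plus 500 tokens divided by the output speed. The TTFT value below is a placeholder, since it is not stated on this page; the output speed is this page's 101 tokens/s, and 'thinking' time is zero for this non-reasoning model.

```python
ttft_seconds = 0.5        # placeholder: time to first token is not given on this page
thinking_seconds = 0.0    # non-reasoning model, so no 'thinking' phase
output_speed_tps = 101    # output tokens per second, from this page

total_seconds = ttft_seconds + thinking_seconds + 500 / output_speed_tps
print(round(total_seconds, 2))  # ~5.45 s with these placeholder inputs
```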
Output tokens used in the Intelligence Index: The number of tokens required to run all evaluations in the Artificial Analysis Intelligence Index (excluding repeats).
Cost to run the Intelligence Index: The cost to run the evaluations in the Artificial Analysis Intelligence Index, calculated using the model's input and output token pricing and the number of tokens used across evaluations (excluding repeats).
Larger context windows are relevant to RAG (Retrieval Augmented Generation) LLM workflows, which typically involve retrieval of, and reasoning over, large amounts of data.
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
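A minimal sketch of what the combined limit means in practice; the token counts in the example calls are placeholders, and real counts would come from whichever tokenizer you actually use.

```python
CONTEXT_WINDOW = 128_000  # combined input + output token limit for GPT-4o (Nov '24)

def fits_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """True if the prompt plus the requested output budget stays within the window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_context(120_000, 4_000))  # True  (124k combined)
print(fits_context(126_000, 4_000))  # False (130k combined)
```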
{"@context":"https://schema.org","@type":"Dataset","name":"Context Window","creator":{"@type":"Organization","name":"Artificial Analysis","url":"https://artificialanalysis.ai"},"description":"Context Window: Tokens Limit; Higher is better","measurementTechnique":"Independent test run by Artificial Analysis on dedicated hardware.","spatialCoverage":"Worldwide","keywords":["analytics","llm","AI","benchmark","model","gpt","claude"],"license":"https://creativecommons.org/licenses/by/4.0/","isAccessibleForFree":true,"citation":"Artificial Analysis (2025). LLM benchmarks dataset. https://artificialanalysis.ai","data":""}
Combination metric covering multiple dimensions of intelligence - the simplest way to compare how smart models are. Version 3.0 was released in September 2025 and includes: MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME 2025, IFBench, AA-LCR, Terminal-Bench Hard, 𝜏²-Bench Telecom. See Intelligence Index methodology for further details, including a breakdown of each evaluation and how we run them.
Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varied by model).
Input price: Price per token included in the request/message sent to the API, represented as USD per million tokens.
Image input price: Price for 1,000 images at a resolution of 1 Megapixel (1024 x 1024) processed by the model.
Input prompt length: Number of tokens provided in the request.
Over-time measurements: Median measurement per day, based on 8 measurements taken at different times each day. Chart labels represent the start of each week's measurements.
Time to first token (TTFT): Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
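A minimal sketch of the measurement; `stream_chunks` is a hypothetical iterable of streamed chunks standing in for an actual streaming client, and in this sketch the clock starts when iteration begins (i.e. when the request is assumed to be issued).

```python
import time

def time_to_first_token(stream_chunks) -> float:
    """Seconds from starting the request (beginning iteration) to the first chunk."""
    start = time.monotonic()
    for _chunk in stream_chunks:          # the first iteration blocks until a chunk arrives
        return time.monotonic() - start
    return float("nan")                   # the stream produced no chunks
```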
Time to first answer token: Time to first answer token received, in seconds, after the API request is sent. For reasoning models, this includes the 'thinking' time of the model before providing an answer. For models which do not support streaming, this represents the time to receive the completion.
Total parameters: The total number of trainable weights and biases in the model, expressed in billions. These parameters are learned during training and determine the model's ability to process and generate responses.
Active parameters: The number of parameters actually executed during each inference forward pass, expressed in billions. For Mixture of Experts (MoE) models, a routing mechanism selects a subset of experts per token, resulting in fewer active than total parameters. Dense models use all parameters, so active equals total.
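To make the distinction concrete, here is a sketch with purely illustrative numbers (not figures for any model on this page).

```python
# Hypothetical MoE model: every number below is illustrative.
total_params_b = 100.0            # total trainable parameters (billions)
expert_params_b = 80.0            # parameters spread across the expert layers
shared_params_b = total_params_b - expert_params_b    # attention, embeddings, router, ...
num_experts, experts_per_token = 16, 2                # router activates 2 of 16 experts per token

active_params_b = shared_params_b + expert_params_b * experts_per_token / num_experts
print(active_params_b)  # 30.0 -> ~30B active per forward pass; a dense 100B model would use all 100B
```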