OpenAI: Models Intelligence, Performance & Price
Analysis of OpenAI's models across key metrics including quality, price, output speed, latency, context window & more. This analysis is intended to support you in choosing the best model provided by OpenAI for your use case. For more details, including our methodology, see our FAQs. Models analyzed: o3-mini, o1-preview, o1-mini, GPT-4o (Aug '24), GPT-4o (May '24), GPT-4o (Nov '24), GPT-4o mini, GPT-4o (ChatGPT), GPT-4.5 (Preview), and GPT-4.
OpenAI Model Comparison Summary
Intelligence:
o3-mini and o1-mini are the highest quality models offered by OpenAI, followed by GPT-4o (Nov '24), GPT-4o (ChatGPT) & GPT-4o (Aug '24).
Output Speed (tokens/s):
o1-mini (201 t/s) and o3-mini (179 t/s) are the fastest models offered by OpenAI, followed by o1-preview, GPT-4o mini & GPT-4o (ChatGPT).
Latency (seconds):
GPT-4o (Aug '24) (0.44s) and GPT-4o mini (0.46s) are the lowest latency models offered by OpenAI, followed by GPT-4o (May '24), GPT-4o (ChatGPT) & GPT-4o (Nov '24).
Blended Price ($/M tokens):
GPT-4o mini ($0.26) and o3-mini ($1.93) are the cheapest models offered by OpenAI, followed by o1-mini, GPT-4o (Nov '24) & GPT-4o (Aug '24).
Context Window Size:
o3-mini (200k) and o1-mini (128k) are the largest context window models offered by OpenAI, followed by GPT-4o (Nov '24), GPT-4o (ChatGPT) & GPT-4o (Aug '24).
Highlights
Charts compare Intelligence (Artificial Analysis Intelligence Index; higher is better), Speed (output tokens per second; higher is better), and Price (USD per 1M tokens; lower is better).
Model | Context Window | Intelligence Index | Blended Price ($/M tokens) | Output Speed (tokens/s) | Latency (s)
---|---|---|---|---|---
o3-mini | 200k | 63 | $1.93 | 179.1 | 13.38
o1-mini | 128k | 54 | $1.93 | 200.6 | 10.77
GPT-4o (Nov '24) | 128k | 41 | $4.38 | 49.7 | 0.52
GPT-4o (ChatGPT) | 128k | 41 | $7.50 | 100.3 | 0.50
GPT-4o (Aug '24) | 128k | 41 | $4.38 | 51.7 | 0.44
GPT-4o (May '24) | 128k | 41 | $7.50 | 54.7 | 0.49
GPT-4o mini | 128k | 36 | $0.26 | 106.8 | 0.46
o1-preview | 128k | | $26.25 | 110.6 | 27.59
GPT-4.5 (Preview) | 128k | | $93.75 | 37.4 | 1.30
GPT-4 | 8k | | $37.50 | 27.6 | 1.24
Key definitions
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).
Latency: Time to first token received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Price: Price per token, represented as USD per million Tokens. Price is a blend of Input & Output token prices (3:1 ratio); see the sketch after these definitions.
Output Price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
Input Price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Time period: Metrics are 'live' and based on the past 72 hours of measurements; measurements are taken 8 times per day for single requests and 2 times per day for parallel requests.
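The blended price and output speed definitions above reduce to simple arithmetic. The Python sketch below illustrates them; the function names are illustrative, and the example figures assume OpenAI's published GPT-4o mini rates of roughly $0.15/M input and $0.60/M output tokens, which blend to the $0.26 shown in the table.

```python
def blended_price(input_price_per_m: float, output_price_per_m: float) -> float:
    """Blend input & output prices at the 3:1 input-to-output ratio used on this page.

    Prices are expressed in USD per million tokens.
    """
    return (3 * input_price_per_m + output_price_per_m) / 4


def output_speed(tokens_after_first_chunk: int,
                 time_first_chunk_s: float,
                 time_last_chunk_s: float) -> float:
    """Tokens per second while generating, i.e. measured after the first chunk arrives."""
    return tokens_after_first_chunk / (time_last_chunk_s - time_first_chunk_s)


# Assumed example: GPT-4o mini at ~$0.15/M input and ~$0.60/M output
# blends to (3 * 0.15 + 0.60) / 4 = $0.26 per million tokens.
print(f"${blended_price(0.15, 0.60):.2f} per 1M tokens")

# Latency, as defined above, is simply time_first_chunk_s minus the time the
# API request was sent (or time to the full completion for non-streaming models).
```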