xAI has launched a newer model, Grok Beta; we suggest considering that model instead.
For more information, see Comparison of Grok Beta to other models and API provider benchmarks for Grok Beta.
Grok-1 Intelligence, Performance & Price Analysis
Model summary
Grok-1 is among the least intelligent models on the index, but it is well priced compared with other open-weights non-reasoning models of similar size. The model has an 8k-token context window with knowledge up to October 2023.
Grok-1 scores 12 on the Artificial Analysis Intelligence Index, placing it at the lower end of its class (class average: 23).
Pricing for Grok-1 is listed at $0.00 per 1M input tokens and $0.00 per 1M output tokens, against class averages of $0.40 (input) and $1.60 (output).
| Attribute | Value |
|---|---|
| Reasoning | No (this page covers the non-reasoning version; a reasoning variant may also exist) |
| Input modality | Text |
| Output modality | Text |
| Knowledge cutoff | Oct 1, 2023 |
| Context window | 8k tokens (~12 A4 pages at 12 pt Arial) |
| Total parameters | 314B |
| Active parameters | 78B (active per token during inference) |
| License | Apache 2.0 |
| Model weights | Hugging Face |
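The "~12 A4 pages" figure for the context window can be sanity-checked with a rough conversion. The constants below (about 0.75 words per token, about 500 words per A4 page at 12 pt) are rule-of-thumb assumptions, not values published on this page:

```python
def tokens_to_a4_pages(tokens: int,
                       words_per_token: float = 0.75,
                       words_per_page: int = 500) -> float:
    """Rough estimate of how many A4 pages of prose fit in a token budget.

    Both conversion constants are common rules of thumb for English text
    and vary by tokenizer and formatting.
    """
    return tokens * words_per_token / words_per_page

# Grok-1's 8k (8,192-token) context window.
print(tokens_to_a4_pages(8_192))  # ~12 pages
```

With these assumptions the 8,192-token window works out to roughly 12 pages, matching the table above.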
Metrics are compared against models of the same class:
- Non-reasoning models → compared only with other non-reasoning models
- Reasoning models → compared against both reasoning and non-reasoning models
- Open weights models → compared only with other open weights models of the same size class:
- Tiny: ≤4B parameters
- Small: 4B–40B parameters
- Medium: 40B–150B parameters
- Large: >150B parameters
- Proprietary models → compared across proprietary and open weights models of the same price range, using a blended 3:1 input/output price ratio:
- <$0.15 per 1M tokens
- $0.15–$1 per 1M tokens
- >$1 per 1M tokens
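The price-range buckets above use a blended per-token price. A minimal sketch of that blend, assuming the stated 3:1 ratio means a weighted average of three parts input price to one part output price (the exact formula is not given on this page):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend per-1M-token prices with an assumed 3:1 input:output weighting."""
    return (3 * input_price + output_price) / 4

# Using the class averages quoted above: $0.40 input, $1.60 output.
print(blended_price(0.40, 1.60))  # ~$0.70 per 1M tokens
```

Under this assumption, the class-average pricing lands in the $0.15-$1 per 1M tokens bucket.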
Highlights
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Intelligence Evaluations
- Agentic real-world work tasks (scored as (Elo − 500) / 2000)
- Agentic coding & terminal use
- Agentic tool use
- Long context reasoning
- Knowledge (1 − hallucination rate)
- Reasoning & knowledge
- Scientific reasoning
- Coding
- Instruction following
- Physics reasoning
- Long-horizon agentic tasks
- Visual reasoning
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Cost to Run Artificial Analysis Intelligence Index
Context Window
Pricing
Pricing now includes a “Cache Hit Price” alongside Input and Output pricing, with new blend ratios.
Pricing: Cache Hit, Input, and Output
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
End-to-End Response Time
Seconds to output 500 tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed
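The description above implies a simple composition of the three measured quantities. A sketch of that calculation, with hypothetical example numbers (the page does not publish Grok-1's measured latency or speed here):

```python
def end_to_end_seconds(ttft_s: float,
                       output_speed_tps: float,
                       thinking_s: float = 0.0,
                       answer_tokens: int = 500) -> float:
    """End-to-end response time: time to first token, plus any 'thinking'
    time (reasoning models only), plus time to stream the answer tokens
    at the measured output speed."""
    return ttft_s + thinking_s + answer_tokens / output_speed_tps

# Hypothetical non-reasoning model: 0.5 s TTFT, 100 tokens/s output speed.
print(end_to_end_seconds(0.5, 100.0))  # 5.5 s for 500 output tokens
```

For a non-reasoning model like Grok-1, the thinking term is zero, so the metric reduces to time to first token plus 500 tokens divided by output speed.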
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
Frequently Asked Questions
Common questions about Grok-1
Grok-1 was released on March 17, 2024.
Grok-1 was created by xAI.
Grok-1 scores 12 (estimated) on the Artificial Analysis Intelligence Index, placing it at the lower end among other open weight non-reasoning models of similar size (median: 23).
No, Grok-1 is not a reasoning model. It provides direct responses without extended chain-of-thought reasoning.
Grok-1 supports text only input.
Grok-1 supports text only output.
No, Grok-1 does not support image input. It can only process text.
No, Grok-1 is not multimodal; it supports text input only.
Grok-1 has a context window of 8.2k tokens. This determines how much text and conversation history the model can process in a single request.
Yes, Grok-1 is open weights. The model weights are publicly available and can be downloaded for self-hosting.
Grok-1 has 314 billion parameters (78 billion active).
Grok-1 is a Mixture of Experts (MoE) model with 314 billion total parameters, but only 78 billion active parameters are used during inference.
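The practical consequence of the MoE split is that memory cost scales with total parameters while per-token compute scales with active parameters. A back-of-the-envelope sketch using the common rule of thumb of ~2 FLOPs per active parameter per generated token (an approximation, not a figure from this page):

```python
TOTAL_PARAMS = 314e9   # all 314B weights must be held in memory
ACTIVE_PARAMS = 78e9   # only 78B are used per token during inference

# Rule of thumb: ~2 FLOPs per active parameter per generated token.
flops_per_token = 2 * ACTIVE_PARAMS
print(f"{flops_per_token:.2e} FLOPs/token")  # ~1.56e+11
```

By this estimate, Grok-1's per-token compute is comparable to a ~78B dense model, even though its memory footprint matches a 314B one.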
Grok-1 is released under the Apache 2.0 license, which allows commercial use.
Grok-1 achieves a score of 12 on the Artificial Analysis Intelligence Index. This composite benchmark evaluates models across reasoning, knowledge, mathematics, and coding.
Grok-1 has a knowledge cutoff of October 2023. The model's training data includes information up to this date.
Grok-1 is an open weights model that can be downloaded and self-hosted.