Mistral has launched a newer model, Mistral Small 3.2; we suggest considering that model instead.
For more information, see Comparison of Mistral Small 3.2 to other models and API provider benchmarks for Mistral Small 3.2.

Mixtral 8x7B Instruct Intelligence, Performance & Price Analysis
Model summary
Intelligence
Artificial Analysis Intelligence Index
Speed
Output tokens per second
Input Price
USD per 1M tokens
Output Price
USD per 1M tokens
Verbosity
Output tokens from Intelligence Index
Metrics are compared against models of the same class:
- Non-reasoning models → compared only with other non-reasoning models
- Reasoning models → compared across both reasoning and non-reasoning models
- Open weights models → compared only with other open weights models of the same size class:
  - Tiny: ≤4B parameters
  - Small: 4B–40B parameters
  - Medium: 40B–150B parameters
  - Large: >150B parameters
- Proprietary models → compared across proprietary and open weights models of the same price range, using a blended 3:1 input/output price ratio:
  - <$0.15 per 1M tokens
  - $0.15–$1 per 1M tokens
  - >$1 per 1M tokens
| Reasoning | No (this page shows the non-reasoning version of this model; a reasoning variant may also exist) |
|---|---|
| Input modality | Text |
| Output modality | Text |
| Context window | 33k tokens (≈49 A4 pages in 12pt Arial) |
| Total parameters | 46.7B |
| Active parameters | 12.9B (parameters active per token during inference) |
| License | Apache 2.0 |
| Model weights | Hugging Face |
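The "≈49 A4 pages" figure in the table above can be reproduced with back-of-envelope arithmetic. The conversion factors below (≈0.75 words per token, ≈500 words per A4 page in 12pt Arial) are common rules of thumb, not values published with the page:

```python
def tokens_to_a4_pages(tokens: int,
                       words_per_token: float = 0.75,
                       words_per_page: int = 500) -> float:
    """Rough estimate of how many A4 pages (12pt Arial) a token count fills.

    Both conversion factors are rule-of-thumb assumptions.
    """
    return tokens * words_per_token / words_per_page

# A 33k-token context window works out to roughly 49 full pages.
pages = tokens_to_a4_pages(33_000)
```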
Mixtral 8x7B Instruct is among the least intelligent models and somewhat expensive compared with other open weights non-reasoning models of similar size. The model has a 33k-token context window.
Mixtral 8x7B Instruct scores 8 on the Artificial Analysis Intelligence Index, placing it at the lower end among comparable models (which average 13). In evaluating the Intelligence Index, it generated 1.5M tokens, fairly concise compared with the average of 3.8M.
Pricing for Mixtral 8x7B Instruct is $0.54 per 1M input tokens (somewhat expensive, average: $0.20) and $0.60 per 1M output tokens (somewhat expensive, average: $0.57). In total, it cost $1.63 to evaluate Mixtral 8x7B Instruct on the Intelligence Index.
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Intelligence Evaluations
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Cost to Run Artificial Analysis Intelligence Index
Context Window
Pricing
Pricing: Input and Output Prices
Intelligence vs. Price (Log Scale)
Pricing Comparison of Mixtral 8x7B Instruct API Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
End-to-End Response Time
Seconds to output 500 Tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed
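The end-to-end metric described above reduces to a simple formula: first-token latency, plus any "thinking" time for reasoning models, plus generation time for 500 tokens at the measured output speed. A minimal sketch (the function name and signature are illustrative, not part of the Artificial Analysis methodology):

```python
def end_to_end_seconds(ttft_s: float,
                       output_tokens_per_s: float,
                       thinking_s: float = 0.0,
                       n_tokens: int = 500) -> float:
    """Seconds to receive n_tokens of output:
    time to first token + 'thinking' time (reasoning models only)
    + generation time at the measured output speed."""
    return ttft_s + thinking_s + n_tokens / output_tokens_per_s

# e.g. 0.5s to first token at 100 tokens/s, non-reasoning model
total = end_to_end_seconds(ttft_s=0.5, output_tokens_per_s=100)
```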
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
Frequently Asked Questions
Common questions about Mixtral 8x7B Instruct
Mixtral 8x7B Instruct was released on December 11, 2023.
Mixtral 8x7B Instruct was created by Mistral.
Mixtral 8x7B Instruct scores 8 (estimated) on the Artificial Analysis Intelligence Index, placing it at the lower end among other open weight non-reasoning models of similar size (median: 13).
Mixtral 8x7B Instruct costs $0.54 per 1M input tokens (better than average, median: $0.52) and $0.60 per 1M output tokens (better than average, median: $0.81), based on the median across providers serving the model.
Mixtral 8x7B Instruct costs $0.54 per 1M input tokens and $0.60 per 1M output tokens (based on the median across providers serving the model). For a blended rate (3:1 input to output ratio), this is $0.54 per 1M tokens. Pricing may vary by provider. Compare provider pricing
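The blended rate above follows from the 3:1 input:output weighting. A minimal sketch of that calculation (with the quoted median prices it yields ≈$0.555 per 1M tokens, so the published $0.54 figure is presumably blended from unrounded per-provider medians):

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """Blended USD per 1M tokens, assuming a 3:1 input:output token mix."""
    return (3 * input_usd_per_m + output_usd_per_m) / 4

# Mixtral 8x7B Instruct median prices: $0.54 input / $0.60 output per 1M tokens
blended = blended_price(0.54, 0.60)
```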
When evaluated on the Intelligence Index, Mixtral 8x7B Instruct generated 1.5M output tokens, which is better than average compared to other open weight non-reasoning models of similar size (median: 3.8M).
No, Mixtral 8x7B Instruct is not a reasoning model. It provides direct responses without extended chain-of-thought reasoning.
Mixtral 8x7B Instruct supports text-only input.
Mixtral 8x7B Instruct supports text-only output.
No, Mixtral 8x7B Instruct does not support image input. It can only process text.
No, Mixtral 8x7B Instruct is not multimodal. It supports text input only.
Mixtral 8x7B Instruct has a context window of 33k tokens. This determines how much text and conversation history the model can process in a single request.
Yes, Mixtral 8x7B Instruct is open weights. The model weights are publicly available and can be downloaded for self-hosting.
Mixtral 8x7B Instruct has 46.7 billion parameters (12.9 billion active).
Mixtral 8x7B Instruct is a Mixture of Experts (MoE) model with 46.7 billion total parameters, but only 12.9 billion active parameters are used during inference.
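The total vs. active split is consistent with Mixtral's published design of 8 experts with 2 routed per token. Treating the shared (non-expert) parameters and the per-expert size as unknowns, the two figures above pin them down. This is back-of-envelope arithmetic under that top-2-of-8 assumption, not official numbers:

```python
# total  = shared + 8 * per_expert   (all 8 experts are stored)
# active = shared + 2 * per_expert   (top-2 routing per token)
total_b, active_b = 46.7, 12.9            # billions of parameters

per_expert_b = (total_b - active_b) / 6   # ≈ 5.63B per expert
shared_b = active_b - 2 * per_expert_b    # ≈ 1.63B shared (attention, embeddings, router)
```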
Mixtral 8x7B Instruct is released under the Apache 2.0 license. This license allows commercial use. View license
Mixtral 8x7B Instruct achieves a score of 8 on the Artificial Analysis Intelligence Index. This composite benchmark evaluates models across reasoning, knowledge, mathematics, and coding.
Yes, Mixtral 8x7B Instruct is available via API through 3 providers. Compare API providers