Jamba 1.7 Large Intelligence, Performance & Price Analysis
Model summary
Intelligence
Artificial Analysis Intelligence Index
Speed
Output tokens per second
Input Price
USD per 1M tokens
Output Price
USD per 1M tokens
Verbosity
Output tokens from Intelligence Index
Metrics are compared against models of the same class:
- Non-reasoning models → compared only with other non-reasoning models
- Reasoning models → compared across both reasoning and non-reasoning models
- Open weights models → compared only with other open weights models of the same size class:
  - Tiny: ≤4B parameters
  - Small: 4B–40B parameters
  - Medium: 40B–150B parameters
  - Large: >150B parameters
- Proprietary models → compared across proprietary and open weights models of the same price range, using a blended 3:1 input/output price ratio:
  - <$0.15 per 1M tokens
  - $0.15–$1 per 1M tokens
  - >$1 per 1M tokens
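The size-class bucketing above can be sketched as a small helper (an illustrative function of ours, not part of any published tooling):

```python
def size_class(total_params_b: float) -> str:
    """Bucket an open weights model by total parameters (in billions),
    following the size classes listed above."""
    if total_params_b <= 4:
        return "Tiny"
    if total_params_b <= 40:
        return "Small"
    if total_params_b <= 150:
        return "Medium"
    return "Large"

# Jamba 1.7 Large has 398B total parameters:
print(size_class(398))  # Large
```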
| Reasoning | No. This page shows the non-reasoning version of this model; a reasoning variant may also exist. |
|---|---|
| Input modality | Supports: text |
| Output modality | Supports: text |
| Knowledge cutoff | Aug 22, 2024 |
| Context window | 256k tokens (~384 A4 pages in 12pt Arial) |
| Total parameters | 398B |
| Active parameters | 94B (parameters active per token during inference) |
| License | Jamba Open Model License Agreement |
| Model weights | Hugging Face |
Jamba 1.7 Large is among the least intelligent models and particularly expensive compared with other open weights non-reasoning models of similar size. It is also faster than average and fairly concise. The model supports text input, outputs text, and has a 256k-token context window with knowledge up to August 2024.
Jamba 1.7 Large scores 11 on the Artificial Analysis Intelligence Index, placing it at the lower end among comparable models (average: 22). When evaluated on the Intelligence Index, it generated 8.1M tokens, close to the average of 8.1M.
Pricing for Jamba 1.7 Large is $2.00 per 1M input tokens (expensive, average: $0.56) and $8.00 per 1M output tokens (expensive, average: $1.59). In total, it cost $965.33 to evaluate Jamba 1.7 Large on the Intelligence Index.
At 60 tokens per second, Jamba 1.7 Large is faster than average (average: 55).
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Intelligence Evaluations
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Cost to Run Artificial Analysis Intelligence Index
Context Window
Context Window
Pricing
Pricing: Input and Output Prices
Intelligence vs. Price (Log Scale)
Pricing Comparison of Jamba 1.7 Large API Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time
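The end-to-end metric above reduces to a simple formula. As a hedged sketch using this page's measured figures (the function name is ours; `thinking_s` is zero for a non-reasoning model like Jamba 1.7 Large):

```python
def end_to_end_seconds(ttft_s: float, output_tps: float,
                       thinking_s: float = 0.0, n_tokens: int = 500) -> float:
    """Seconds to produce n_tokens: time to first token, plus any
    'thinking' time (reasoning models only), plus generation time."""
    return ttft_s + thinking_s + n_tokens / output_tps

# Jamba 1.7 Large: TTFT 1.37s, 59.8 output tokens/s
print(round(end_to_end_seconds(1.37, 59.8), 2))  # 9.73
```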
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
Comparisons to Jamba 1.7 Large
Jamba 1.7 Large
gpt-oss-20B (high)
gpt-oss-120B (high)
GPT-5.2 (xhigh)
GPT-5.4 (xhigh)
GPT-5.4 Pro (xhigh)
GPT-5.3 Codex (xhigh)
Llama 4 Maverick
Gemini 3.1 Flash-Lite Preview
Gemini 3.1 Pro Preview
Gemini 3 Flash
Claude Sonnet 4.6 (max)
Claude Opus 4.6 (max)
Claude 4.5 Haiku
Mistral Large 3
DeepSeek V3.2
Grok 4.1 Fast
Grok 4.20 Beta 0309
Nova 2.0 Pro Preview (medium)
MiniMax-M2.5
NVIDIA Nemotron 3 Super
NVIDIA Nemotron 3 Nano
Kimi K2.5
K-EXAONE
MiMo-V2-Flash (Feb 2026)
K2 Think V2
Mi:dm K 2.5 Pro
GLM-5
Qwen3.5 397B A17B
Frequently Asked Questions
Common questions about Jamba 1.7 Large
Jamba 1.7 Large was released on July 7, 2025.
Jamba 1.7 Large was created by AI21 Labs.
Jamba 1.7 Large scores 11 on the Artificial Analysis Intelligence Index, placing it at the lower end among other open weight non-reasoning models of similar size (median: 22).
Jamba 1.7 Large generates output at 59.8 tokens per second (based on AI21 Labs's API), which is above average compared to other open weight non-reasoning models of similar size (median: 55.4 t/s).
Jamba 1.7 Large has a time to first token (TTFT) of 1.37s (based on AI21 Labs's API), which is very competitive compared to other open weight non-reasoning models of similar size (median: 1.96s).
Jamba 1.7 Large costs $2.00 per 1M input tokens (at the higher end, median: $0.60) and $8.00 per 1M output tokens (at the higher end, median: $2.33), based on AI21 Labs's API.
Jamba 1.7 Large costs $2.00 per 1M input tokens and $8.00 per 1M output tokens (based on AI21 Labs's API). For a blended rate (3:1 input to output ratio), this is $3.50 per 1M tokens. Pricing may vary by provider. Compare provider pricing
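The blended rate quoted above follows directly from the 3:1 weighting; a quick check of the arithmetic:

```python
# Blended price at a 3:1 input/output token ratio, using this page's figures.
input_price = 2.00   # USD per 1M input tokens
output_price = 8.00  # USD per 1M output tokens
blended = (3 * input_price + 1 * output_price) / 4
print(blended)  # 3.5 -> $3.50 per 1M tokens
```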
When evaluated on the Intelligence Index, Jamba 1.7 Large generated 8.1M output tokens, in line with other open weight non-reasoning models of similar size (median: 8.1M).
No, Jamba 1.7 Large is not a reasoning model. It provides direct responses without extended chain-of-thought reasoning.
Jamba 1.7 Large supports text input.
Jamba 1.7 Large supports text output.
No, Jamba 1.7 Large does not support image input. It can only process text.
No, Jamba 1.7 Large is not multimodal. It only supports text input.
Jamba 1.7 Large has a context window of 256k tokens. This determines how much text and conversation history the model can process in a single request.
Yes, Jamba 1.7 Large is open weights. The model weights are publicly available and can be downloaded for self-hosting.
Jamba 1.7 Large has 398 billion parameters (94 billion active).
Jamba 1.7 Large is a Mixture of Experts (MoE) model with 398 billion total parameters, but only 94 billion active parameters are used during inference.
Jamba 1.7 Large is released under the Jamba Open Model License Agreement. This license allows commercial use. View license
Jamba 1.7 Large achieves a score of 11 on the Artificial Analysis Intelligence Index. This composite benchmark evaluates models across reasoning, knowledge, mathematics, and coding.
Jamba 1.7 Large has a knowledge cutoff of August 2024. The model's training data includes information up to this date.
Yes, Jamba 1.7 Large is available via API through 1 provider. Compare API providers
Jamba 1.7 Large is available through 1 API provider. Compare providers