DeepSeek R1 Distill Llama 70B Intelligence, Performance & Price Analysis
Model summary
Intelligence
Artificial Analysis Intelligence Index
Speed
Output tokens per second
Input Price
USD per 1M tokens
Output Price
USD per 1M tokens
Verbosity
Output tokens from Intelligence Index
Metrics are compared against models of the same class:
- Non-reasoning models → compared only with other non-reasoning models
- Reasoning models → compared across both reasoning and non-reasoning models
- Open weights models → compared only with other open weights models of the same size class:
  - Tiny: ≤4B parameters
  - Small: 4B–40B parameters
  - Medium: 40B–150B parameters
  - Large: >150B parameters
- Proprietary models → compared across proprietary and open weights models of the same price range, using a blended 3:1 input/output price ratio:
  - <$0.15 per 1M tokens
  - $0.15–$1 per 1M tokens
  - >$1 per 1M tokens
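As a sketch of how the price-range buckets above work, assuming the blended rate is the 3:1-weighted average of input and output prices (the page does not spell out the exact weighting, so treat the formula as an assumption):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend per-1M-token prices at an assumed 3:1 input:output token ratio."""
    return (3 * input_price + output_price) / 4

def price_class(blended: float) -> str:
    """Map a blended per-1M-token price to the comparison buckets above."""
    if blended < 0.15:
        return "<$0.15 per 1M tokens"
    if blended <= 1.0:
        return "$0.15–$1 per 1M tokens"
    return ">$1 per 1M tokens"

# DeepSeek R1 Distill Llama 70B's median prices from this page:
print(price_class(blended_price(0.70, 1.05)))  # → $0.15–$1 per 1M tokens
```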
| Reasoning | Yes (this page shows the reasoning version of the model; a non-reasoning variant may also exist) |
|---|---|
| Input modality | Supports: text |
| Output modality | Supports: text |
| Context window | 128k tokens (~192 A4 pages in 12pt Arial) |
| Total parameters | 70B |
| License | LLAMA 3.3 COMMUNITY LICENSE AGREEMENT |
| Model weights | Hugging Face |
DeepSeek R1 Distill Llama 70B is above average in intelligence but particularly expensive compared with other open weights models of similar size. It is also slower than average and somewhat verbose. The model accepts text input, produces text output, and has a 128k-token context window.
DeepSeek R1 Distill Llama 70B scores 16 on the Artificial Analysis Intelligence Index, placing it above average among comparable models (average: 15). While being evaluated for the Intelligence Index, it generated 24M tokens, notably verbose compared with the average of 7.3M.
Pricing for DeepSeek R1 Distill Llama 70B is $0.70 per 1M input tokens (expensive, average: $0.17) and $1.05 per 1M output tokens (somewhat expensive, average: $0.57). In total, it cost $50.62 to evaluate DeepSeek R1 Distill Llama 70B on the Intelligence Index.
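The output side of that $50.62 evaluation cost can be checked from the figures above; the remainder would be input-side cost (the input token count is not reported here, so this is only a consistency sketch):

```python
# Per-1M-token prices and token counts reported on this page.
output_price = 1.05     # USD per 1M output tokens
output_tokens_m = 24    # million output tokens generated during the Intelligence Index

output_cost = output_tokens_m * output_price
print(f"Output-side cost: ${output_cost:.2f}")  # → Output-side cost: $25.20

# The reported total was $50.62; the remainder would be the input-side cost.
total_cost = 50.62
print(f"Implied input-side cost: ${total_cost - output_cost:.2f}")  # → Implied input-side cost: $25.42
```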
At 59 tokens per second, DeepSeek R1 Distill Llama 70B is slower than the average of 80 tokens per second.
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Intelligence Evaluations
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Cost to Run Artificial Analysis Intelligence Index
Context Window
Context Window
Pricing
Pricing: Input and Output Prices
Intelligence vs. Price (Log Scale)
Pricing Comparison of DeepSeek R1 Distill Llama 70B API Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time
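The end-to-end formula described above can be sketched with the median figures reported elsewhere on this page (TTFT 2.82s, 59.4 tokens/s); 'thinking' time is not reported here, so it is left at zero:

```python
def end_to_end_seconds(ttft_s: float, output_speed_tps: float,
                       tokens: int = 500, thinking_s: float = 0.0) -> float:
    """Time to first token + 'thinking' time + generation time for `tokens` tokens."""
    return ttft_s + thinking_s + tokens / output_speed_tps

# Median figures reported on this page (thinking time unknown, assumed 0).
print(round(end_to_end_seconds(2.82, 59.4), 1))  # → 11.2
```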
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
Comparisons to DeepSeek R1 Distill Llama 70B
DeepSeek R1 Distill Llama 70B
gpt-oss-20B (high)
gpt-oss-120B (high)
GPT-5.2 (xhigh)
GPT-5.4 (xhigh)
GPT-5.4 Pro (xhigh)
GPT-5.3 Codex (xhigh)
Llama 4 Maverick
Gemini 3.1 Flash-Lite Preview
Gemini 3.1 Pro Preview
Gemini 3 Flash
Claude 4.5 Haiku
Claude Sonnet 4.6 (max)
Claude Opus 4.6 (max)
Mistral Large 3
DeepSeek V3.2
Grok 4.20 Beta 0309
Grok 4.1 Fast
Nova 2.0 Pro Preview (medium)
MiniMax-M2.5
NVIDIA Nemotron 3 Super
NVIDIA Nemotron 3 Nano
Kimi K2.5
K-EXAONE
MiMo-V2-Flash (Feb 2026)
K2 Think V2
Mi:dm K 2.5 Pro
GLM-5
Qwen3.5 397B A17B
Frequently Asked Questions
Common questions about DeepSeek R1 Distill Llama 70B
DeepSeek R1 Distill Llama 70B was released on January 20, 2025.
DeepSeek R1 Distill Llama 70B was created by DeepSeek.
DeepSeek R1 Distill Llama 70B scores 16 (estimated) on the Artificial Analysis Intelligence Index, placing it above average among other open weight models of similar size (median: 15).
DeepSeek R1 Distill Llama 70B generates output at 59.4 tokens per second (based on the median across providers serving the model), which is below average compared to other open weight models of similar size (median: 79.5 t/s).
DeepSeek R1 Distill Llama 70B has a time to first token (TTFT) of 2.82s (based on the median across providers serving the model), which is at the higher end compared to other open weight models of similar size (median: 1.57s).
DeepSeek R1 Distill Llama 70B costs $0.70 per 1M input tokens (at the higher end, median: $0.38) and $1.05 per 1M output tokens (somewhat higher than average, median: $0.85), based on the median across providers serving the model.
DeepSeek R1 Distill Llama 70B costs $0.70 per 1M input tokens and $1.05 per 1M output tokens (based on the median across providers serving the model). For a blended rate (3:1 input to output ratio), this is $0.88 per 1M tokens. Pricing may vary by provider. Compare provider pricing
When evaluated on the Intelligence Index, DeepSeek R1 Distill Llama 70B generated 24M output tokens, which is somewhat higher than average compared to other open weight models of similar size (median: 7.3M).
Yes, DeepSeek R1 Distill Llama 70B is a reasoning model. It uses extended thinking or chain-of-thought reasoning to work through complex problems before providing an answer.
DeepSeek R1 Distill Llama 70B supports text input.
DeepSeek R1 Distill Llama 70B supports text output.
No, DeepSeek R1 Distill Llama 70B does not support image input. It can only process text.
No, DeepSeek R1 Distill Llama 70B is not multimodal. It only supports text input.
DeepSeek R1 Distill Llama 70B has a context window of 128k tokens. This determines how much text and conversation history the model can process in a single request.
Yes, DeepSeek R1 Distill Llama 70B is open weights. The model weights are publicly available and can be downloaded for self-hosting.
DeepSeek R1 Distill Llama 70B has 70 billion parameters.
DeepSeek R1 Distill Llama 70B is released under the LLAMA 3.3 COMMUNITY LICENSE AGREEMENT, which allows commercial use. View license
DeepSeek R1 Distill Llama 70B achieves a score of 16 on the Artificial Analysis Intelligence Index. This composite benchmark evaluates models across reasoning, knowledge, mathematics, and coding.
Yes, DeepSeek R1 Distill Llama 70B is available via API through 3 providers. Compare API providers