DeepSeek LLM 67B Chat (V1): Intelligence, Performance & Price Analysis
Analysis of DeepSeek's DeepSeek LLM 67B Chat (V1) and comparison to other AI models across key metrics including quality, price, performance (tokens per second & time to first token), context window & more. Click on any model to compare API providers for that model. For more details, including our methodology, see our FAQs.
DeepSeek has launched a newer model, DeepSeek V3 0324 (Mar '25). We suggest considering this model instead of DeepSeek LLM 67B (V1). See the following pages for a comparison of DeepSeek V3 0324 (Mar '25) to other models and DeepSeek V3 0324 (Mar '25) API provider benchmarks.
Comparison Summary
Intelligence: DeepSeek LLM 67B (V1) is of lower quality than average, with an Intelligence Index across evaluations of 20.
Context Window: DeepSeek LLM 67B (V1) has a smaller context window than average, at 4.1k tokens.
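For intuition, a 4.1k-token context window bounds the combined length of the prompt and the completion on a single request. The sketch below is a minimal illustration of that constraint only; the 4,096-token limit interpretation, the completion budget, and the whitespace-based token count are assumptions, not DeepSeek's API or tokenizer.

```python
# Minimal sketch: checking whether a request fits a ~4.1k-token context window.
# Assumptions: the limit applies to prompt + completion tokens combined, and a
# rough whitespace split approximates token counts (real tokenizers differ).

CONTEXT_WINDOW = 4096          # assumed hard limit, in tokens
MAX_COMPLETION_TOKENS = 512    # hypothetical completion budget


def approximate_token_count(text: str) -> int:
    """Crude proxy for token count; a real tokenizer would give exact numbers."""
    return len(text.split())


def fits_context_window(prompt: str, max_completion: int = MAX_COMPLETION_TOKENS) -> bool:
    """Return True if the prompt plus the reserved completion budget fits the window."""
    return approximate_token_count(prompt) + max_completion <= CONTEXT_WINDOW


if __name__ == "__main__":
    prompt = "Summarise the quarterly report in three bullet points. " * 500
    print(fits_context_window(prompt))  # False: the prompt alone exceeds the window
```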
Highlights
Intelligence: Artificial Analysis Intelligence Index (higher is better). The value shown is an estimate; an independent evaluation is forthcoming.
Speed: Output tokens per second (higher is better).
Price: USD per 1M tokens (lower is better).
Note: Long prompts are not supported, as a context window of at least 10k tokens is required.
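As a rough illustration of how the speed and price figures above are typically derived, the sketch below computes time to first token, output tokens per second, and per-request cost from a single hypothetical streaming request. All timestamps, token counts, and per-million-token prices are made-up placeholders, not measured values for this model or the methodology used here.

```python
# Illustrative sketch (not this site's methodology): deriving benchmark-style
# metrics from one hypothetical streaming request.

# Hypothetical measurements for a single request
request_sent_at = 0.00            # seconds
first_token_at = 0.45             # seconds; gives time to first token (TTFT)
last_token_at = 5.45              # seconds
output_tokens = 400               # tokens generated
input_tokens = 1_000              # tokens in the prompt

# Hypothetical prices, expressed as USD per 1M tokens
input_price_per_m = 0.50
output_price_per_m = 1.50

ttft = first_token_at - request_sent_at
generation_time = last_token_at - first_token_at
output_tokens_per_second = output_tokens / generation_time

request_cost = (
    input_tokens / 1_000_000 * input_price_per_m
    + output_tokens / 1_000_000 * output_price_per_m
)

print(f"Time to first token: {ttft:.2f} s")
print(f"Output speed: {output_tokens_per_second:.1f} tokens/s")
print(f"Request cost: ${request_cost:.6f}")
```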
DeepSeek LLM 67B Chat (V1) Model Details
Comparisons to DeepSeek LLM 67B (V1)
o3-pro
o3
GPT-4.1
o4-mini (high)
Llama 4 Maverick
Llama 4 Scout
Gemini 2.5 Pro Preview (Jun '25)
Gemini 2.5 Flash Preview (May '25) (Reasoning)
Claude 4 Sonnet (Extended Thinking)
Claude 4 Opus (Extended Thinking)
Claude 4 Sonnet
Mistral Medium 3
DeepSeek V3 0324 (Mar '25)
DeepSeek R1 0528 (May '25)
Grok 3 mini Reasoning (high)
Nova Premier
Llama 3.1 Nemotron Ultra 253B v1 (Reasoning)
Qwen3 235B A22B (Reasoning)
GPT-4o (Nov '24)
Further details