DeepSeek LLM 67B Chat (V1): Quality, Performance & Price Analysis
Analysis of DeepSeek's DeepSeek LLM 67B Chat (V1) and comparison to other AI models across key metrics, including quality, price, performance (tokens per second and time to first token), and context window. For more details, including our methodology, see our FAQs.
Comparison Summary
Quality:
DeepSeek LLM 67B (V1) is of lower quality than average, with a Quality Index of 47 across evaluations.
Price: DeepSeek LLM 67B (V1) is cheaper than average, with a price of $0.90 per 1M tokens (blended 3:1).
DeepSeek LLM 67B (V1) input token price: $0.90 per 1M tokens; output token price: $0.90 per 1M tokens.
Speed: DeepSeek LLM 67B (V1) is slower than average, with an output speed of 28.1 tokens per second.
Latency: DeepSeek LLM 67B (V1) has lower latency than average, taking 0.48s to receive the first token (TTFT).
Context Window: DeepSeek LLM 67B (V1) has a smaller context window than average, at 4.1k tokens.
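The summary metrics above can be reproduced with simple arithmetic. The sketch below assumes the "blended 3:1" label means a weighted average of 3 input tokens per 1 output token, and estimates end-to-end response time as TTFT plus streaming time; the prices, speed, and latency are the values quoted above, and the function names are illustrative.

```python
INPUT_PRICE = 0.90   # USD per 1M input tokens (from this page)
OUTPUT_PRICE = 0.90  # USD per 1M output tokens (from this page)

def blended_price(input_price, output_price, ratio=3):
    """Blend input/output prices at an assumed `ratio`:1 input-to-output mix."""
    return (ratio * input_price + output_price) / (ratio + 1)

def estimated_response_time(n_output_tokens, ttft_s=0.48, tokens_per_s=28.1):
    """Rough end-to-end latency: time to first token plus streaming time."""
    return ttft_s + n_output_tokens / tokens_per_s

print(blended_price(INPUT_PRICE, OUTPUT_PRICE))  # 0.9 USD per 1M tokens
print(round(estimated_response_time(500), 1))    # ~18.3 s for a 500-token reply
```

Because input and output prices are identical here, any blend ratio yields the same $0.90; the ratio only matters for models that price input and output differently.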
Highlights
Quality
Artificial Analysis Quality Index; Higher is better
Speed
Output Tokens per Second; Higher is better
Price
USD per 1M Tokens; Lower is better
Note: Long prompts are not supported, as they require a context window of at least 10k tokens.
Further details