Eigen AI: Model Intelligence, Performance & Price
Analysis of Eigen AI's models across key metrics including quality, price, output speed, latency, and context window. This analysis is intended to help you choose the best Eigen AI model for your use case.
Eigen AI offers 30 models, each with different intelligence, performance, and pricing characteristics. Below is a comparison of the key metrics across models.
- For intelligence, the top models on Eigen AI are GLM-5 (50), Kimi K2.5 (47), and Qwen3.5 397B A17B (45).
- For output speed, the fastest models are gpt-oss-120B (high) (851 t/s), gpt-oss-120B (low) (701 t/s), and Kimi K2.5 (419 t/s). Speed varies significantly across models, with a 140% difference between the fastest and slowest.
- For latency, Qwen3 30B (0.65s), Qwen3 30B (0.67s), and Qwen3 8B (0.69s) offer the lowest time to first token.
- For pricing, Qwen3 VL 30B A3B ($0.08), Qwen3 30B ($0.08), and Qwen3 8B ($0.08) offer the lowest blended prices per 1M tokens.
- For context window size, Llama 4 Maverick (1M) and Qwen3.5 397B A17B (262k) support the largest context windows on Eigen AI.
Intelligence Evaluations
Artificial Analysis Intelligence Index
Intelligence vs. Price
Context Window
Function (Tool) Calling & JSON Mode
| Models | Function calling | JSON Mode |
|---|---|---|
Pricing
Intelligence vs. Price
Performance Summary
Output Speed vs. Price
Speed
Output Speed, measured in tokens per second
Latency
Time to First Answer Token, measured in seconds to first token
End-to-End Response Time
Seconds to output 500 tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time vs. Price
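As a worked example, here is a minimal sketch of how the end-to-end figure defined above can be computed. The function name and sample values are illustrative, not Eigen AI's published numbers:

```python
def end_to_end_seconds(ttft_s: float, output_speed_tps: float,
                       thinking_s: float = 0.0, output_tokens: int = 500) -> float:
    """Time to first token + 'thinking' time (reasoning models) + streaming time."""
    return ttft_s + thinking_s + output_tokens / output_speed_tps

# Illustrative: 0.65s time to first token, no thinking, 419 tokens/s output speed
print(round(end_to_end_seconds(0.65, 419), 2))  # ~1.84s to output 500 tokens
```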
Frequently Asked Questions
Common questions about Eigen AI
Eigen AI offers 30 models that we track: GLM-5, Kimi K2.5, Qwen3.5 397B A17B, MiniMax-M2.5, DeepSeek V3.2, GLM-5, Qwen3.5 397B A17B, Kimi K2.5, DeepSeek V3.1 Terminus, gpt-oss-120B (high), DeepSeek V3.2, Qwen3 235B A22B 2507, DeepSeek V3.1 Terminus, DeepSeek V3.1, DeepSeek V3.1, Qwen3 Next 80B A3B, Qwen3 235B 2507, Qwen3 Coder 480B, gpt-oss-120B (low), Qwen3 VL 235B A22B, Qwen3 VL 30B A3B, Llama 4 Maverick, Qwen3 VL 30B A3B, Qwen3 30B, Llama 3.3 70B, Llama 4 Scout, Qwen3 8B, Qwen3 30B, Llama 3.1 8B, and Qwen3 8B.
The most intelligent model available on Eigen AI is GLM-5 with an Intelligence Index score of 50.
The fastest model on Eigen AI by output speed is gpt-oss-120B (high) at 851.1 tokens per second.
The model with the lowest time to first token on Eigen AI is Qwen3 30B at 0.65s. Lower latency means faster initial response time.
The most affordable model on Eigen AI by blended price is Qwen3 VL 30B A3B at $0.08 per 1M tokens (3:1 input to output ratio).
Prices on Eigen AI vary up to 19x across models, from $0.08 per 1M tokens for Qwen3 VL 30B A3B to $1.55 per 1M tokens for GLM-5.
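The blended figure weights input and output prices by the assumed 3:1 usage ratio. A minimal sketch of the calculation; the per-direction rates below are placeholders chosen to blend to $0.08, not Eigen AI's published prices:

```python
def blended_price(input_usd_per_1m: float, output_usd_per_1m: float) -> float:
    """Blend input/output prices per 1M tokens at a 3:1 input:output ratio."""
    return (3 * input_usd_per_1m + 1 * output_usd_per_1m) / 4

# Placeholder rates: $0.05/1M input, $0.17/1M output
print(round(blended_price(0.05, 0.17), 2))  # 0.08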
Yes, Eigen AI offers an OpenAI-compatible API, making it easy to switch from OpenAI or use existing OpenAI SDK integrations.
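A minimal sketch of what switching looks like with the OpenAI Python SDK. The base URL and model ID below are hypothetical; check Eigen AI's documentation for the real values:

```python
from openai import OpenAI

# Hypothetical endpoint and key; only the base URL changes versus OpenAI.
client = OpenAI(
    base_url="https://api.eigen-ai.example/v1",
    api_key="YOUR_EIGEN_AI_API_KEY",
)

response = client.chat.completions.create(
    model="glm-5",  # hypothetical model ID
    messages=[{"role": "user", "content": "In one sentence, what is a blended price?"}],
)
print(response.choices[0].message.content)
```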
28 of 30 models on Eigen AI support JSON mode for structured output.
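On models that support it, JSON mode is typically requested through the OpenAI-compatible response_format parameter; a hedged sketch using the same placeholder endpoint as above:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.eigen-ai.example/v1",  # hypothetical
                api_key="YOUR_EIGEN_AI_API_KEY")

# response_format is the standard OpenAI-compatible way to request JSON mode.
response = client.chat.completions.create(
    model="qwen3-30b",  # hypothetical model ID
    response_format={"type": "json_object"},
    messages=[{"role": "user",
               "content": "Return a JSON object with keys 'model' and 'score'."}],
)
print(response.choices[0].message.content)  # a JSON string
```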
19 of 30 models on Eigen AI support function calling (tool use).
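Function calling follows the OpenAI-style tools schema on compatible APIs; a sketch in which the tool definition, endpoint, and model ID are all illustrative:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.eigen-ai.example/v1",  # hypothetical
                api_key="YOUR_EIGEN_AI_API_KEY")

# OpenAI-style tool schema; the tool itself is hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "get_blended_price",
        "description": "Look up a model's blended price per 1M tokens.",
        "parameters": {
            "type": "object",
            "properties": {"model": {"type": "string"}},
            "required": ["model"],
        },
    },
}]

response = client.chat.completions.create(
    model="kimi-k2.5",  # hypothetical model ID
    tools=tools,
    messages=[{"role": "user", "content": "What does GLM-5 cost per 1M tokens?"}],
)
print(response.choices[0].message.tool_calls)  # populated if the model called the tool
```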
Yes, Eigen AI offers 14 reasoning models: GLM-5, Kimi K2.5, Qwen3.5 397B A17B, MiniMax-M2.5, DeepSeek V3.2, DeepSeek V3.1 Terminus, gpt-oss-120B (high), Qwen3 235B A22B 2507, DeepSeek V3.1, Qwen3 Next 80B A3B, gpt-oss-120B (low), Qwen3 VL 30B A3B, Qwen3 30B, and Qwen3 8B. Reasoning models use extended thinking to work through complex problems before providing an answer.
Yes, all 30 models on Eigen AI are open weight models.
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
When choosing a model on Eigen AI, consider: intelligence (for quality-sensitive tasks), output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and features like context window size, JSON mode, or function calling support.
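One way to operationalize these trade-offs is to filter on hard requirements first, then rank the survivors by the metric that matters most. A minimal sketch with hypothetical catalog entries (real values come from the tables above):

```python
# Hypothetical catalog entries for illustration only.
CATALOG = [
    {"name": "model-a", "intelligence": 50, "speed_tps": 120,
     "blended_usd_per_1m": 1.55, "tools": True},
    {"name": "model-b", "intelligence": 32, "speed_tps": 851,
     "blended_usd_per_1m": 0.26, "tools": False},
]

def pick_model(catalog, min_intelligence=0, max_price=float("inf"),
               need_tools=False, rank_by="speed_tps"):
    """Filter on hard constraints, then rank by the metric that matters most."""
    ok = [m for m in catalog
          if m["intelligence"] >= min_intelligence
          and m["blended_usd_per_1m"] <= max_price
          and (m["tools"] or not need_tools)]
    return max(ok, key=lambda m: m[rank_by], default=None)

# Throughput-focused pick under a $0.50 per 1M token budget:
print(pick_model(CATALOG, max_price=0.50, rank_by="speed_tps"))  # model-b
```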