Novita: Model Intelligence, Performance & Price
Analysis of Novita's models across key metrics including quality, price, output speed, latency, context window & more. This analysis is intended to help you choose the best Novita model for your use case.
Novita offers 71 models, each with different intelligence, performance, and pricing characteristics. Below is a comparison of the key metrics across models.
- For intelligence, the top models on Novita are GLM-5 FP8 (50), MiniMax-M2.7 (FP8) (50), and Kimi K2.5 (47).
- For output speed, the fastest models are Qwen3 Next 80B A3B (201 t/s), Qwen3.5 35B A3B (192 t/s), and Qwen3 Coder Next (FP8) (172 t/s). Speed varies significantly across models, with a 57% difference between the fastest and slowest.
- For latency, gpt-oss-120B (low) (0.81s), Llama 4 Scout (0.83s), and gpt-oss-120B (high) (0.83s) offer the lowest time to first token.
- For pricing, Llama 3.1 8B ($0.03), gpt-oss-20B (high) ($0.07), and gpt-oss-20B (low) ($0.07) offer the lowest blended prices per 1M tokens. Prices vary by roughly 133x between the cheapest and most expensive models.
- For context window size, Llama 4 Maverick (FP8) (1M), MiniMax M1 80k (1M), and Kimi K2.5 (262k) support the largest context windows on Novita.
Charts on this page
- Intelligence Evaluations (Artificial Analysis Intelligence Index)
- Intelligence vs. Price
- Context Window
- Function (Tool) Calling & JSON Mode
- Pricing
- Performance Summary: Output Speed vs. Price
- Output Speed (measured in tokens per second)
- Latency (measured as time in seconds to first answer token)
- End-to-End Response Time (seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed; see the sketch after this list)
- End-to-End Response Time vs. Price
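To make the end-to-end response time definition concrete, here is a minimal sketch of the calculation, assuming 'thinking' tokens are produced at the same measured output speed (the exact accounting used for the charts is not spelled out on this page, so treat this as illustrative):

```python
def end_to_end_seconds(ttft_s: float, output_tps: float,
                       thinking_tokens: int = 0, answer_tokens: int = 500) -> float:
    """Approximate seconds to deliver a 500-token answer: time to first
    token, plus thinking tokens (for reasoning models) and the answer
    itself, all generated at the measured output speed."""
    return ttft_s + (thinking_tokens + answer_tokens) / output_tps

# Example using a figure from this page: Qwen3 Next 80B A3B at ~201 t/s,
# with an assumed (not measured here) 0.9 s time to first token.
print(round(end_to_end_seconds(ttft_s=0.9, output_tps=201), 2))  # ~3.39 s
```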
Frequently Asked Questions
Common questions about Novita
Novita offers 71 models that we track: GLM-5, MiniMax-M2.7, Kimi K2.5, Qwen3.5 397B A17B, GLM-4.7, Qwen3.5 27B, MiniMax-M2.5, DeepSeek V3.2, Qwen3.5 122B A10B, Kimi K2 Thinking, GLM-5, Qwen3.5 397B A17B, MiniMax-M2.1, Gemma 4 31B, Kimi K2.5, Qwen3.5 35B A3B, MiniMax-M2, KAT-Coder-Pro V1, GLM-4.7, DeepSeek V3.1 Terminus, gpt-oss-120B (high), DeepSeek V3.2 Exp, GLM-4.6, DeepSeek V3.2, Qwen3 Max, Gemma 4 26B A4B, Kimi K2 0905, GLM-4.6, GLM-4.7-Flash, Qwen3 235B A22B 2507, DeepSeek V3.1 Terminus, DeepSeek V3.2 Exp, Qwen3 Coder Next, DeepSeek V3.1, DeepSeek V3.1, Qwen3 VL 235B A22B, DeepSeek R1 0528, Qwen3 Next 80B A3B, GLM-4.5, Kimi K2, Qwen3 235B 2507, Qwen3 Coder 480B, gpt-oss-120B (low), gpt-oss-20B (high), MiniMax M1 80k, GLM-4.6V, DeepSeek V3 0324, GLM-4.7-Flash, gpt-oss-20B (low), Qwen3 VL 235B A22B, Qwen3 Next 80B A3B, Qwen3 VL 30B A3B, DeepSeek R1 (Jan), DeepSeek R1 (Jan), Llama 4 Maverick, GLM-4.6V, Qwen3 235B, Qwen3 32B, DeepSeek V3 (Dec), DeepSeek V3 (Dec), Qwen3 VL 30B A3B, Qwen3 30B, GLM-4.5V, ERNIE 4.5 300B A47B, Llama 3.3 70B, Llama 4 Scout, GLM-4.5V, Llama 3.1 8B, Gemma 3 27B, Llama 3.2 1B, and Qwen3 32B.
The most intelligent model available on Novita is GLM-5 with an Intelligence Index score of 50.
The fastest model on Novita by output speed is Qwen3 Next 80B A3B at 200.9 tokens per second.
The model with the lowest time to first token on Novita is gpt-oss-120B (low) at 0.81s. Lower latency means faster initial response time.
The most affordable model on Novita by blended price is Llama 3.1 8B at $0.03 per 1M tokens (3:1 input to output ratio).
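For reference, a blended price at a 3:1 ratio weights input tokens three times as heavily as output tokens. A minimal sketch of that calculation, using hypothetical per-token prices:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended USD price per 1M tokens at a 3:1 input:output ratio:
    three input tokens are assumed for every output token."""
    return (3 * input_per_m + 1 * output_per_m) / 4

# Hypothetical input/output prices, for illustration only:
print(blended_price(input_per_m=0.02, output_per_m=0.06))  # 0.03 -> $0.03 per 1M
```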
Prices on Novita vary by roughly 133x across models, from $0.03 per 1M tokens for Llama 3.1 8B to $4.00 per 1M tokens for DeepSeek R1 (Jan).
Yes, Novita offers an OpenAI-compatible API, making it easy to switch from OpenAI or use existing OpenAI SDK integrations.
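A minimal sketch of pointing the official OpenAI Python SDK at Novita; the base URL and model slug below are assumptions to verify against Novita's own documentation:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_NOVITA_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",  # hypothetical slug for Llama 3.1 8B
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```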
64 of 71 models on Novita support JSON mode for structured output.
57 of 71 models on Novita support function calling (tool use).
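Where a model supports these features, they are exposed through the same OpenAI-compatible request shape. A hedged sketch, reusing the `client` from the earlier example (the model slug and tool schema are hypothetical):

```python
# Function (tool) calling, for models that support it:
resp = client.chat.completions.create(
    model="deepseek/deepseek-v3.2",  # hypothetical slug; pick a tool-capable model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
print(resp.choices[0].message.tool_calls)

# JSON mode, for models that support it:
resp = client.chat.completions.create(
    model="deepseek/deepseek-v3.2",  # hypothetical slug
    messages=[{"role": "user", "content": 'Return a JSON object {"city": ...} for Paris.'}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)
```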
Yes, Novita offers 37 reasoning models: GLM-5, MiniMax-M2.7, Kimi K2.5, Qwen3.5 397B A17B, GLM-4.7, Qwen3.5 27B, MiniMax-M2.5, DeepSeek V3.2, Qwen3.5 122B A10B, Kimi K2 Thinking, MiniMax-M2.1, Gemma 4 31B, Qwen3.5 35B A3B, MiniMax-M2, DeepSeek V3.1 Terminus, gpt-oss-120B (high), DeepSeek V3.2 Exp, GLM-4.6, Gemma 4 26B A4B, GLM-4.7-Flash, Qwen3 235B A22B 2507, DeepSeek V3.1, Qwen3 VL 235B A22B, DeepSeek R1 0528, Qwen3 Next 80B A3B, GLM-4.5, gpt-oss-120B (low), gpt-oss-20B (high), MiniMax M1 80k, GLM-4.6V, gpt-oss-20B (low), Qwen3 VL 30B A3B, DeepSeek R1 (Jan), DeepSeek R1 (Jan), Qwen3 32B, Qwen3 30B, and GLM-4.5V. Reasoning models use extended thinking to work through complex problems before providing an answer.
Yes, 68 of 71 models on Novita are open weight models: GLM-5, Kimi K2.5, Qwen3.5 397B A17B, GLM-4.7, Qwen3.5 27B, MiniMax-M2.5, DeepSeek V3.2, Qwen3.5 122B A10B, Kimi K2 Thinking, GLM-5, Qwen3.5 397B A17B, MiniMax-M2.1, Gemma 4 31B, Kimi K2.5, Qwen3.5 35B A3B, MiniMax-M2, GLM-4.7, DeepSeek V3.1 Terminus, gpt-oss-120B (high), DeepSeek V3.2 Exp, GLM-4.6, DeepSeek V3.2, Gemma 4 26B A4B, Kimi K2 0905, GLM-4.6, GLM-4.7-Flash, Qwen3 235B A22B 2507, DeepSeek V3.1 Terminus, DeepSeek V3.2 Exp, Qwen3 Coder Next, DeepSeek V3.1, DeepSeek V3.1, Qwen3 VL 235B A22B, DeepSeek R1 0528, Qwen3 Next 80B A3B, GLM-4.5, Kimi K2, Qwen3 235B 2507, Qwen3 Coder 480B, gpt-oss-120B (low), gpt-oss-20B (high), MiniMax M1 80k, GLM-4.6V, DeepSeek V3 0324, GLM-4.7-Flash, gpt-oss-20B (low), Qwen3 VL 235B A22B, Qwen3 Next 80B A3B, Qwen3 VL 30B A3B, DeepSeek R1 (Jan), DeepSeek R1 (Jan), Llama 4 Maverick, GLM-4.6V, Qwen3 235B, Qwen3 32B, DeepSeek V3 (Dec), DeepSeek V3 (Dec), Qwen3 VL 30B A3B, Qwen3 30B, GLM-4.5V, ERNIE 4.5 300B A47B, Llama 3.3 70B, Llama 4 Scout, GLM-4.5V, Llama 3.1 8B, Gemma 3 27B, Llama 3.2 1B, and Qwen3 32B.
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
When choosing a model on Novita, consider: intelligence (for quality-sensitive tasks), output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and features like context window size, JSON mode, or function calling support.
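One simple way to operationalize that trade-off is a weighted score over the metrics on this page. A sketch with illustrative weights; GLM-5's intelligence score (50) and Llama 3.1 8B's blended price ($0.03) are taken from this page, while the remaining figures are placeholders:

```python
def score(m: dict, w: dict) -> float:
    """Higher is better; price and latency enter negatively so that
    cheaper and faster models rank higher."""
    return (w["intelligence"] * m["intelligence"]
            + w["speed"] * m["speed_tps"]
            - w["price"] * m["blended_price"]
            - w["latency"] * m["ttft_s"])

models = [
    {"name": "GLM-5", "intelligence": 50, "speed_tps": 100.0,
     "blended_price": 1.00, "ttft_s": 1.0},
    {"name": "Llama 3.1 8B", "intelligence": 20, "speed_tps": 150.0,
     "blended_price": 0.03, "ttft_s": 0.9},
]
weights = {"intelligence": 1.0, "speed": 0.05, "price": 5.0, "latency": 2.0}
print(max(models, key=lambda m: score(m, weights))["name"])  # GLM-5 with these weights
```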