Parasail: Models Intelligence, Performance & Price
Analysis of Parasail's models across key metrics including quality, price, output speed, latency, context window & more. This analysis is intended to help you choose the best model offered by Parasail for your use case.
Parasail offers 21 models, each with different intelligence, performance, and pricing characteristics. Below is a comparison of the key metrics across models.
- For intelligence, the top models on Parasail are GLM-5.1 (FP8) (51), GLM-5 (FP8) (50), and Kimi K2.5 (47).
- For output speed, the fastest models are Qwen3 Next 80B A3B (129 t/s), Llama 4 Maverick (FP8) (120 t/s), and Trinity Large Thinking (FP8) (92 t/s). Speed varies significantly across models, with a 61% difference between the fastest and slowest.
- For latency, MiniMax-M2.5 (FP8) (0.84s), Qwen3 Next 80B A3B (0.93s), and Qwen3.5 397B A17B (1.00s) offer the lowest time to first token.
- For pricing, Gemma 4 26B A4B ($0.20), Gemma 4 31B ($0.20), and gpt-oss-120B (high) ($0.26) offer the lowest blended prices per 1M tokens.
- For context window size, Llama 4 Maverick (FP8) (1m), Qwen3.5 397B A17B (262k), and Gemma 4 31B (262k) support the largest context windows on Parasail.
Highlights
Intelligence Evaluations
Artificial Analysis Intelligence Index
Intelligence vs. Price
Context Window
JSON Mode & Function Calling
Function (Tool) Calling & JSON Mode
Pricing
Intelligence vs. Price
Performance Summary
Output Speed vs. Price
Speed
Measured by Output Speed (tokens per second)
Output Speed
Latency
Measured by Time (seconds) to First Token
Time to First Answer Token
End-to-End Response Time
Seconds to output 500 tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed (a rough sketch of this calculation follows below)
End-to-End Response Time vs. Price
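As a rough illustration of how such a figure can be composed, the sketch below combines time to first token, 'thinking' time, and output speed into a single response-time estimate. This is a simplified approximation, not the exact benchmark methodology, and the numbers in the example are illustrative rather than measured.

```python
# Simplified approximation of an end-to-end response time: time to first token,
# plus any 'thinking' time for reasoning models, plus the time to stream the
# answer tokens at the measured output speed. The exact benchmark methodology
# may differ; the inputs below are illustrative, not measured values.

def end_to_end_seconds(ttft_s: float, thinking_s: float,
                       output_tokens_per_s: float, answer_tokens: int = 500) -> float:
    """Estimate seconds to produce `answer_tokens` answer tokens."""
    return ttft_s + thinking_s + answer_tokens / output_tokens_per_s

# Example: 0.9 s to first token, 4 s of thinking, 120 tokens/s output speed.
print(round(end_to_end_seconds(0.9, 4.0, 120.0), 2))  # ~9.07 s for 500 tokens
```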
Key definitions
Frequently Asked Questions
Common questions about Parasail
Parasail offers 21 models that we track: GLM-5.1, GLM-5, Kimi K2.5, Qwen3.5 397B A17B, GLM-5.1, GLM-4.7, MiniMax-M2.5, DeepSeek V3.2, Gemma 4 31B, GLM-4.7, gpt-oss-120B (high), Trinity Large Thinking, Gemma 4 26B A4B, Qwen3 Coder Next, Qwen3 235B 2507, gpt-oss-120B (low), Qwen3 VL 235B A22B, Qwen3 Next 80B A3B, Llama 4 Maverick, Llama 3.3 70B, and Gemma 3 27B.
The most intelligent model available on Parasail is GLM-5.1 with an Intelligence Index score of 51.
The fastest model on Parasail by output speed is Qwen3 Next 80B A3B at 129.2 tokens per second.
The model with the lowest time to first token on Parasail is MiniMax-M2.5 at 0.84s. Lower latency means faster initial response time.
The most affordable model on Parasail by blended price is Gemma 4 26B A4B at $0.20 per 1M tokens (3:1 input-to-output ratio).
Prices on Parasail vary up to 11x across models, from $0.20 per 1M tokens for Gemma 4 26B A4B to $2.15 per 1M tokens for GLM-5.1.
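For reference, the blended figures above weight input and output prices at the stated 3:1 ratio; the sketch below shows that calculation under this assumption, using placeholder per-token prices rather than Parasail's published rates.

```python
# Blended price per 1M tokens, assuming the stated 3:1 input-to-output token
# ratio (three parts input price to one part output price). The prices in the
# example are placeholders, not Parasail's published rates.

def blended_price_per_1m(input_price: float, output_price: float) -> float:
    """Blend per-1M-token input and output prices at a 3:1 ratio."""
    return (3 * input_price + 1 * output_price) / 4

# Example: $0.15 per 1M input tokens and $0.35 per 1M output tokens.
print(blended_price_per_1m(0.15, 0.35))  # 0.20
```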
Yes, Parasail offers an OpenAI-compatible API, making it easy to switch from OpenAI or use existing OpenAI SDK integrations.
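As a minimal sketch of what that looks like in practice, the snippet below points the official OpenAI Python SDK at an OpenAI-compatible endpoint by overriding the base URL. The base URL, environment variable name, and model identifier are placeholders, so check Parasail's documentation for the actual values.

```python
# Minimal sketch: using the OpenAI Python SDK against an OpenAI-compatible
# endpoint by overriding base_url. The URL, env var name, and model identifier
# below are placeholders; consult Parasail's documentation for real values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.parasail.example/v1",   # placeholder endpoint
    api_key=os.environ["PARASAIL_API_KEY"],       # assumed environment variable
)

response = client.chat.completions.create(
    model="gpt-oss-120b",  # illustrative model identifier
    messages=[{"role": "user", "content": "Give a one-sentence summary of FP8 inference."}],
)
print(response.choices[0].message.content)
```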
17 of 21 models on Parasail support JSON mode for structured output.
19 of 21 models on Parasail support function calling (tool use).
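As an illustration of how these features are typically exposed through an OpenAI-compatible API, the sketch below requests JSON output and declares a single tool. The endpoint, model identifier, and weather tool are hypothetical placeholders, and support varies by model as noted above.

```python
# Hedged sketch of JSON mode and function (tool) calling via an OpenAI-compatible
# API. The endpoint, model identifier, and get_weather tool are hypothetical
# placeholders; not every Parasail model supports both features.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.parasail.example/v1",   # placeholder endpoint
    api_key=os.environ["PARASAIL_API_KEY"],       # assumed environment variable
)

# JSON mode: constrain the model to emit valid JSON.
json_resp = client.chat.completions.create(
    model="gpt-oss-120b",  # illustrative model identifier
    messages=[{"role": "user", "content": "List three colors as JSON under the key 'colors'."}],
    response_format={"type": "json_object"},
)
print(json_resp.choices[0].message.content)

# Function (tool) calling: declare a tool the model may choose to invoke.
tool_resp = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
print(tool_resp.choices[0].message.tool_calls)
```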
Yes, Parasail offers 12 reasoning models: GLM-5.1, GLM-5, Kimi K2.5, Qwen3.5 397B A17B, GLM-4.7, MiniMax-M2.5, DeepSeek V3.2, Gemma 4 31B, gpt-oss-120B (high), Trinity Large Thinking, Gemma 4 26B A4B, and gpt-oss-120B (low). Reasoning models use extended thinking to work through complex problems before providing an answer.
Yes, all 21 models on Parasail are open weight models.
Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.
When choosing a model on Parasail, consider: intelligence (for quality-sensitive tasks), output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and features like context window size, JSON mode, or function calling support.