AI21 Labs: Models Quality, Performance & Price

Analysis of AI21 Labs' models across key metrics including quality, price, output speed, latency, context window & more. This analysis is intended to support you in choosing the best model provided by AI21 Labs for your use case. For more details, including our methodology, see our FAQs. Models analyzed: Jamba 1.5 Large, Jamba 1.5 Mini, and Jamba Instruct.

AI21 Labs Model Comparison Summary

Quality: Jamba 1.5 Large and Jamba 1.5 Mini are the highest quality models offered by AI21 Labs, followed by Jamba Instruct.
Output Speed (tokens/s): Jamba Instruct (180 t/s) and Jamba 1.5 Mini (180 t/s) are the fastest models offered by AI21 Labs, followed by Jamba 1.5 Large.
Latency (seconds): Jamba Instruct (0.31s) and Jamba 1.5 Mini (0.32s) are the lowest latency models offered by AI21 Labs, followed by Jamba 1.5 Large.
Blended Price ($/M tokens): Jamba 1.5 Mini ($0.25) and Jamba Instruct ($0.55) are the cheapest models offered by AI21 Labs, followed by Jamba 1.5 Large.
Context Window Size: Jamba 1.5 Large (256k) and Jamba 1.5 Mini (256k) are the largest context window models offered by AI21 Labs, followed by Jamba Instruct.

Highlights

[Charts omitted: Quality (Artificial Analysis Quality Index; higher is better), Speed (output tokens per second; higher is better), and Price (USD per 1M tokens; lower is better), with interactive filters for parallel queries and prompt length.]
| Model | Context Window | Quality Index | Blended Price ($/M tokens) | Output Speed (tokens/s) | Latency (s) |
|---|---|---|---|---|---|
| Jamba 1.5 Large | 256k | 64 | $3.50 | 56.2 | 0.52 |
| Jamba 1.5 Mini | 256k | 46 | $0.25 | 179.5 | 0.32 |
| Jamba Instruct | 256k | n/a | $0.55 | 180.1 | 0.31 |
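
The blended prices above follow from the 3:1 input:output weighting described under Key definitions below. A minimal sketch of that calculation follows; the per-1M-token input/output prices used here are illustrative assumptions (chosen because they reproduce the blended figures in the table) and should be verified against current AI21 Labs pricing.

```python
# Blended price check: reproduces the $/M token figures in the table
# above using the 3:1 input:output weighting from "Key definitions".
# The input/output prices below are assumptions for illustration only.

def blended_price(input_price: float, output_price: float) -> float:
    """Blend input/output $/1M-token prices at a 3:1 input:output ratio."""
    return (3 * input_price + output_price) / 4

assumed_prices = {  # model: (input $/1M tokens, output $/1M tokens) -- assumed
    "Jamba 1.5 Large": (2.00, 8.00),
    "Jamba 1.5 Mini": (0.20, 0.40),
    "Jamba Instruct": (0.50, 0.70),
}

for model, (inp, out) in assumed_prices.items():
    print(f"{model}: ${blended_price(inp, out):.2f} per 1M blended tokens")
# Jamba 1.5 Large: $3.50 | Jamba 1.5 Mini: $0.25 | Jamba Instruct: $0.55
```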

Key definitions

Artificial Analysis Quality Index: Average result across our evaluations covering different dimensions of model intelligence. Currently includes MMLU, GPQA, Math & HumanEval. OpenAI o1 model figures are preliminary and are based on figures stated by OpenAI. See methodology for more details.
Context window: Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).
Output Speed: Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming); see the measurement sketch after these definitions.
Latency: Time to first token received, in seconds, after the API request is sent. For models which do not support streaming, this represents the time to receive the completion.
Price: Price per token, represented as USD per million tokens. Price is a blend of input & output token prices at a 3:1 input:output ratio (see the pricing sketch after the comparison table above).
Output Price: Price per token generated by the model (received from the API), represented as USD per million Tokens.
Input Price: Price per token included in the request/message sent to the API, represented as USD per million Tokens.
Time period: Metrics are 'live' and are based on the past 14 days of measurements. Measurements are taken 8 times per day for single requests and 2 times per day for parallel requests.
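
To make the Output Speed and Latency definitions concrete, here is a minimal measurement sketch. It assumes a hypothetical pre-collected trace of streamed chunks as (arrival_time_seconds, token_count) pairs; no real SDK or API is referenced, and the trace values are invented for illustration.

```python
def measure_stream(chunks, request_sent_at):
    """Compute (latency_s, output_tokens_per_s) from a streamed response.

    chunks: iterable of (arrival_time_s, n_tokens), one entry per chunk.
    Latency is time to first token; output speed counts tokens received
    after the first chunk, per the definitions above.
    """
    first_at = last_at = None
    tokens_after_first = 0
    for arrival, n_tokens in chunks:
        if first_at is None:
            first_at = arrival              # first chunk sets time-to-first-token
        else:
            tokens_after_first += n_tokens  # speed is measured while generating
        last_at = arrival
    latency = first_at - request_sent_at
    generating = last_at - first_at
    speed = tokens_after_first / generating if generating > 0 else float("nan")
    return latency, speed

# Hypothetical trace: request sent at t=0.0s, first chunk arrives at 0.31s,
# then 19 more chunks of 8 tokens every 50 ms.
trace = [(0.31 + 0.05 * i, 8) for i in range(20)]
lat, tps = measure_stream(trace, request_sent_at=0.0)
print(f"latency = {lat:.2f}s, output speed = {tps:.1f} tokens/s")
```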