
Open weights model

Released July 2024

Llama 3.1 Instruct 405B API Provider Benchmarking & Analysis

Model Comparison

Analysis of API providers for Llama 3.1 Instruct 405B across performance metrics including latency (time to first token), output speed (output tokens per second), price, and other metrics. API providers benchmarked include Amazon Bedrock Latency Optimized, Amazon Bedrock Standard, and Microsoft Azure.

Fastest

#1 Amazon Standard: 340.1 t/s
#2 Amazon Latency Optimized: 73.7 t/s
#3 Azure: 29.4 t/s

Output speed · 3 providers total

Lowest Latency

#1 Azure: 2.01 s
#2 Amazon Latency Optimized: 2.46 s
#3 Amazon Standard: 76.47 s

Time to first answer token · 3 providers total

Lowest Price

#1 Amazon Standard: $2.40
#2 Amazon Latency Optimized: $3.00
#3 Azure: $8.00

Blended price (per 1M tokens, 3:1 input:output ratio) · 3 providers total

Llama 3.1 405B is available through 3 API providers, each offering different performance characteristics and pricing. Below is a comparison of the key metrics across providers.

  • For output speed, the top providers are Amazon Standard (340.1 t/s), Amazon Latency Optimized (73.7 t/s), and Azure (29.4 t/s).
  • For latency, Azure (2.01 s), Amazon Latency Optimized (2.46 s), and Amazon Standard (76.47 s) offer the lowest time to first token.
  • For pricing, Amazon Standard ($2.40), Amazon Latency Optimized ($3.00), and Azure ($8.00) offer the lowest blended prices per 1M tokens.
  • Amazon Standard combines the highest output speed with the lowest price, though its time to first token is by far the highest. For the lowest latency, Azure is the best choice.

Highlights

Highlight charts: output speed (output tokens per second, higher is better), latency (seconds, lower is better), and blended price (USD per 1M tokens at a 3:1 input:output ratio, lower is better).

Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. Other workloads can still be selected.

Pricing

Pricing: Cache Hit, Input, and Output

USD per 1M tokens · Lower is better · 10,000 Input Tokens

Price per token for cached prompts (previously processed), typically offering a significant discount compared to regular input price, represented as USD per million tokens. The values shown here are the cache hit price; cache write and cache storage are billed separately and vary by provider — see "Cache pricing by provider" for details.

Price per token included in the request/message sent to the API, represented as USD per million tokens.

The blended bar shown here uses cache hit price only. Other caching costs differ by provider:

  • Anthropic: charges a separate cache write fee, with different rates for 5-minute and 1-hour TTLs (1-hour TTL is more expensive). Blended price charts use Anthropic cache write price for the input leg.
  • Google (Vertex/Gemini): charges a per-hour cache storage fee in addition to cache hit pricing. Some providers also use tiered pricing for prompts above 200K tokens.
  • OpenAI, DeepSeek, others: typically charge only cache hit pricing with no write or storage fee.

See Prompt Caching for the full breakdown.
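As a rough illustration of how prompt caching changes effective input cost, the sketch below blends cache-hit and regular input prices by an assumed hit rate. Both the cache-hit price and the 60% hit rate are hypothetical values for illustration, not figures from this page, and provider-specific cache write and storage fees are ignored.

```python
def effective_input_price(input_price: float, cache_hit_price: float,
                          hit_rate: float) -> float:
    """Effective USD per 1M input tokens when a fraction of input tokens
    is served from cache. Ignores cache write and storage fees, which
    vary by provider (see above)."""
    return hit_rate * cache_hit_price + (1.0 - hit_rate) * input_price

# Hypothetical: $2.40/1M regular input, $0.60/1M cache-hit price,
# 60% of input tokens served from cache.
print(effective_input_price(2.40, 0.60, 0.60))  # 1.32
```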

Price per token generated by the model (received from the API), represented as USD per million tokens.

Pricing: Blended Price

Blended at 3:1 (Input : Output) · USD per 1M tokens · Lower is better

Price per token, represented as USD per million tokens. Price is a blend of cache hit, input, and output token prices using the selected ratio.
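Concretely, the 3:1 blend weights the input price three times as heavily as the output price. A minimal sketch of that arithmetic, checked against this page's Azure figures ($5.33 input, $16.00 output per 1M tokens):

```python
def blended_price(input_price: float, output_price: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    """Blended USD per 1M tokens at the given input:output token ratio."""
    total = input_ratio + output_ratio
    return (input_ratio * input_price + output_ratio * output_price) / total

# Azure: (3 * 5.33 + 1 * 16.00) / 4 = 7.9975, i.e. the ~$8.00 shown above.
print(round(blended_price(5.33, 16.00), 2))  # 8.0
```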


Figures represent the median (P50) measurement over the past 72 hours to reflect sustained changes in performance; this applies to all charts on this page.

Speed vs. Price

Blended at 3:1 (Input : Output) · Output speed: output tokens per second · Price: USD per 1M tokens
Scatter plot of the three providers (Amazon Latency Optimized, Amazon Standard, Azure); the most attractive quadrant is high output speed at low price.

Tokens per second received while the model is generating tokens (i.e. after the first chunk has been received from the API, for models which support streaming).


Speed

Measured by Output Speed (tokens per second)

Output Speed

Output tokens per second · Higher is better · 10,000 Input Tokens


Latency vs. Output Speed

Latency: seconds to first token received · Output speed: output tokens per second · 10,000 Input Tokens
Scatter plot of the three providers (Amazon Latency Optimized, Amazon Standard, Azure); bubble size represents blended price (USD per 1M tokens), and the most attractive quadrant is low latency with high output speed.


Time to first token received, in seconds, after the API request is sent. For reasoning models which share reasoning tokens, this will be the first reasoning token. For models which do not support streaming, this represents the time to receive the completion.
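A rough sketch of how both metrics can be measured against a streaming, OpenAI-compatible endpoint. The base URL and model id below are placeholders, and the chunk count is only a crude proxy for token count; a real benchmark would use the provider's reported token usage.

```python
import time
from openai import OpenAI  # assumes the provider exposes an OpenAI-compatible API

client = OpenAI(base_url="https://example-provider/v1", api_key="...")  # placeholders

start = time.perf_counter()
first_chunk_at = None
chunks = 0
stream = client.chat.completions.create(
    model="llama-3.1-405b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Explain TCP slow start in 300 words."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter()  # latency: time to first token
        chunks += 1
end = time.perf_counter()

ttft = first_chunk_at - start
# Output speed only counts generation after the first chunk arrives.
speed = (chunks - 1) / (end - first_chunk_at)
print(f"TTFT: {ttft:.2f}s, output speed: ~{speed:.1f} chunks/s")
```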


Latency

Measured by Time (seconds) to First Token

Time to First Token

Seconds to first token received · Lower is better · 10,000 Input Tokens


End-to-End Response Time

Seconds to output 500 tokens, calculated based on time to first token, 'thinking' time for reasoning models, and output speed

End-to-End Response Time

Seconds to output 500 tokens, including reasoning model 'thinking' time · Lower is better · 10,000 Input Tokens

Seconds to receive a 500 token response. Key components:

  • Input time: Time to receive the first response token
  • Thinking time (only for reasoning models): Time reasoning models spend outputting tokens to reason prior to providing an answer. The token count is based on the average number of reasoning tokens across a diverse set of 60 prompts (see methodology for details).
  • Answer time: Time to generate 500 output tokens, based on output speed

For fair comparison, the number of reasoning tokens is standardized across all providers for each model based on the model's representative query token counts.
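Under this definition, end-to-end time decomposes as TTFT plus generation time for the reasoning and answer tokens at the measured output speed. A sketch of that arithmetic using this page's figures (Llama 3.1 405B is not a reasoning model, so its reasoning-token count is zero); these are approximations, not the measured end-to-end values:

```python
def end_to_end_seconds(ttft: float, output_speed: float,
                       answer_tokens: int = 500,
                       reasoning_tokens: int = 0) -> float:
    """Approximate end-to-end response time per the definition above."""
    return ttft + (reasoning_tokens + answer_tokens) / output_speed

print(round(end_to_end_seconds(2.46, 73.7), 1))    # Amazon Latency Optimized: ~9.2 s
print(round(end_to_end_seconds(2.01, 29.4), 1))    # Azure: ~19.0 s
print(round(end_to_end_seconds(76.47, 340.1), 1))  # Amazon Standard: ~77.9 s
```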


API Features

Function (Tool) Calling & JSON Mode

Provider                     Function calling    JSON Mode
Amazon Latency Optimized     Yes                 No
Amazon Standard              Yes                 No
Azure                        No                  Yes

Indicates whether the provider supports function calling in their API. Function calling is also known as 'Tool Calling'.

Indicates whether the provider supports JSON mode in their API. When JSON mode is enabled, the model will always return a valid JSON object.
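As an illustration only: on providers that expose an OpenAI-compatible API, JSON mode is typically enabled via a `response_format` parameter, as sketched below. The endpoint and model id are placeholders, and the exact parameter name and deployment details vary by provider; consult the provider's documentation for authoritative usage.

```python
from openai import OpenAI

client = OpenAI(base_url="https://example-endpoint/v1", api_key="...")  # placeholders

resp = client.chat.completions.create(
    model="llama-3.1-405b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "List the three providers as a JSON object."}],
    response_format={"type": "json_object"},  # JSON mode: response is a valid JSON object
)
print(resp.choices[0].message.content)
```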

Context Window

Context window: tokens limit · Higher is better

Maximum number of combined input & output tokens. Output tokens commonly have a significantly lower limit (varies by model).

While each model has its own context window, in some cases providers impose a lower limit.

Summary Table of Key Comparison Metrics
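Provider                     Output speed    Latency (TTFT)    Blended price (3:1)
Amazon Standard              340.1 t/s       76.47 s           $2.40 / 1M tokens
Amazon Latency Optimized     73.7 t/s        2.46 s            $3.00 / 1M tokens
Azure                        29.4 t/s        2.01 s            $8.00 / 1M tokens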

Frequently Asked Questions

Common questions about Llama 3.1 Instruct 405B providers

Llama 3.1 Instruct 405B is available through 3 API providers: Amazon Latency Optimized, Amazon Standard, and Azure. Each provider offers different performance characteristics and pricing.

Llama 3.1 Instruct 405B is currently available through 3 API providers that we benchmark and track.

The fastest providers for Llama 3.1 Instruct 405B by output speed are Amazon Standard (340.1 t/s), Amazon Latency Optimized (73.7 t/s), and Azure (29.4 t/s). Output speed measures how quickly tokens are generated after the model starts responding.

The providers with the lowest time to first token for Llama 3.1 Instruct 405B are Azure (2.01s), Amazon Latency Optimized (2.46s), and Amazon Standard (76.47s). Lower latency means faster initial response time.

The most affordable providers for Llama 3.1 Instruct 405B by blended price are Amazon Standard ($2.40 per 1M tokens), Amazon Latency Optimized ($3.00 per 1M tokens), and Azure ($8.00 per 1M tokens). Blended price uses a 3:1 input to output token ratio.

The providers with the lowest input token pricing for Llama 3.1 Instruct 405B are Amazon Standard ($2.40 per 1M input tokens), Amazon Latency Optimized ($3.00 per 1M input tokens), and Azure ($5.33 per 1M input tokens).

The providers with the lowest output token pricing for Llama 3.1 Instruct 405B are Amazon Standard ($2.40 per 1M output tokens), Amazon Latency Optimized ($3.00 per 1M output tokens), and Azure ($16.00 per 1M output tokens).

Prices for Llama 3.1 Instruct 405B vary up to 3.3x across providers. The most affordable is Amazon Standard at $2.40 per 1M tokens, while Azure charges $8.00 per 1M tokens.

Output speed for Llama 3.1 Instruct 405B varies significantly across providers. Amazon Standard is the fastest at 340.1 t/s, which is 11.6x faster than Azure at 29.4 t/s.

1 of 3 providers supports JSON mode for Llama 3.1 Instruct 405B: Azure.

2 of 3 providers support function calling for Llama 3.1 Instruct 405B: Amazon Latency Optimized and Amazon Standard.

The best provider for Llama 3.1 Instruct 405B depends on your priorities: Amazon Standard offers the highest output speed, Azure has the lowest latency, and Amazon Standard provides the most competitive pricing.

When choosing a provider for Llama 3.1 Instruct 405B, consider: output speed (for throughput-intensive tasks), latency (for interactive applications requiring quick first responses), pricing (for cost-sensitive workloads), and API features like JSON mode or function calling.

Yes, provider performance can vary over time due to infrastructure changes, load balancing, and updates. We continuously benchmark all providers and display historical performance trends in the "Over Time" charts.

For information about Llama 3.1 Instruct 405B's intelligence, capabilities, modalities, and how it compares to other models, see the model overview page.