AI Hardware Benchmarking & Performance Analysis
We measure real-world performance of AI accelerator systems during language model inference.
For language model intelligence or API performance benchmarks, see our language model comparisons.
AA-AgentPerf: The Hardware Benchmark for the Agent Era
- Real agent workloads, not synthetic queries: we've captured real coding-agent trajectories in which agents used up to 200 turns and worked with sequence lengths above 100K tokens
- Production optimizations allowed: KV cache reuse, disaggregated prefill/decode, and speculative decoding. We allow the optimizations that labs and inference providers run in production, so results capture what real deployments look like
- Measures what developers need to know: the maximum number of concurrent users at each target output speed, expressed per accelerator, per kW, per $/hr, and per rack (see the sketch after this list)
- Built for every kind of scale: designed to measure systems from a single accelerator up to a full rack, and to fairly evaluate every architecture from DRAM-only designs to SRAM-only designs and everything in between
- Live now: AA-AgentPerf is open for benchmark configuration submissions. The models supported at launch are gpt-oss-120b and DeepSeek V3.2, and we'll publish results on a rolling basis
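To make the headline metric concrete, here is a minimal sketch of how a max-concurrent-users result could be normalized per accelerator, per kW, per $/hr, and per rack. The `SystemResult` fields and the example numbers are illustrative assumptions, not published AA-AgentPerf results:

```python
from dataclasses import dataclass

@dataclass
class SystemResult:
    """One benchmarked configuration. All values are illustrative
    placeholders, not published AA-AgentPerf results."""
    name: str
    max_concurrent_users: int   # at a given target output speed
    num_accelerators: int
    power_kw: float             # system power draw
    price_per_hour: float       # on-demand $/hr for the whole system
    systems_per_rack: int

def normalized_metrics(r: SystemResult) -> dict[str, float]:
    # Express the same headline number per accelerator, per kW,
    # per $/hr, and per rack so systems of different sizes compare fairly.
    return {
        "users_per_accelerator": r.max_concurrent_users / r.num_accelerators,
        "users_per_kw": r.max_concurrent_users / r.power_kw,
        "users_per_dollar_hour": r.max_concurrent_users / r.price_per_hour,
        "users_per_rack": r.max_concurrent_users * r.systems_per_rack,
    }

# Hypothetical example (numbers invented for illustration only):
example = SystemResult("8x-accelerator node", 512, 8, 14.0, 32.0, 4)
print(normalized_metrics(example))
```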
AA-AgentPerf has been shaped by our work with inference providers and by engagement with AI accelerator companies, developers, and enterprise buyers over the past year. Our goal is for anyone deploying models, whether buying or leasing accelerators, to be able to use AA-AgentPerf as the definitive resource for understanding real-world hardware performance. Read the full methodology.
More concurrent agents mean higher total throughput but slower per-user speeds; AA-AgentPerf measures exactly where each system hits this trade-off.
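A toy model makes the trade-off visible: below saturation every user gets full speed, and above it a fixed system-wide token budget is shared among the users. Both constants below are invented for illustration; real systems degrade more gradually:

```python
# Toy model of the concurrency trade-off (all numbers hypothetical).
PEAK_SYSTEM_TOKENS_PER_S = 50_000   # assumed saturated system throughput
SINGLE_USER_SPEED = 200.0           # assumed unloaded tokens/s per user

def per_user_speed(concurrent_users: int) -> float:
    # Each user gets full speed until the shared budget runs out.
    shared = PEAK_SYSTEM_TOKENS_PER_S / concurrent_users
    return min(SINGLE_USER_SPEED, shared)

for users in (10, 100, 250, 500, 1000):
    speed = per_user_speed(users)
    total = speed * users
    print(f"{users:5d} users -> {speed:7.1f} tok/s/user, {total:9.0f} tok/s total")
```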
Max Concurrent Users per System
First results coming soon!
Concurrent Users vs. Output Speed
First results coming soon!
How AA-AgentPerf measures maximum user count
AA-AgentPerf uses a binary search to find the maximum number of concurrent users each system can sustain while meeting its output speed and time-to-first-token targets. Each phase of the search runs the workload at a candidate user count and checks the measured P25 output speed against the target:

| Phase | Users | P25 Speed | Result |
|---|---|---|---|
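A minimal sketch of that search, assuming a hypothetical `run_load_test(users)` helper that replays the agent workload at a given concurrency and returns the P25 output speed and P25 time to first token. The helper, the bounds, and the pass criteria are illustrative, not the exact AA-AgentPerf harness:

```python
def find_max_users(
    run_load_test,          # hypothetical: (users) -> (p25_speed, p25_ttft)
    speed_target: float,    # tokens/s per user the deployment must hold
    ttft_target: float,     # seconds to first token it must stay under
    lo: int = 1,
    hi: int = 4096,         # search ceiling; raise it for larger systems
) -> int:
    """Binary-search the largest concurrency that still meets both the
    output-speed and time-to-first-token targets. Assumes performance
    degrades monotonically as concurrency grows."""

    def passes(users: int) -> bool:
        p25_speed, p25_ttft = run_load_test(users)
        return p25_speed >= speed_target and p25_ttft <= ttft_target

    if not passes(lo):
        return 0  # the system cannot meet the targets even for one user
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so the loop always terminates
        if passes(mid):
            lo = mid              # mid users still meet the targets
        else:
            hi = mid - 1          # too many users; shrink the range
    return lo
```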
System Load Test (AA-SLT)
Our original hardware benchmark, covering a wide range of systems. Read the methodology.
Highlights
Throughput
System Output Throughput at 100 tokens/s Per Query Output Speed
Output Speed
Peak Output Speed per Query
Throughput vs Speed
System Output Throughput vs. Output Speed per Query
System Output Throughput & Output Speed per Query vs. Concurrency
Cost
Cost per Million Input and Output Tokens at 100 tokens/s Per Query Output Speed
Concurrency
End-to-End Latency vs. Concurrency
Pricing
Price per GPU Hour (On-Demand)
Frequently Asked Questions
Which accelerator is best for LLM inference?
For the current Artificial Analysis System Load Test (AA-SLT), NVIDIA's B200 is the most performant accelerator for LLM inference. It leads on peak throughput and output speed per query, though the right choice can still vary by model, deployment goal, and budget.
Which system delivers the highest throughput?
NVIDIA's B200 currently powers the highest-throughput result in the Artificial Analysis System Load Test (AA-SLT): 8xB200 (SXM) serving gpt-oss-120B (high) reaches 92,909 output tokens per second at peak throughput.
Which system is fastest for a single query?
NVIDIA's B200 currently powers the fastest single-query result in the Artificial Analysis System Load Test (AA-SLT): 8xB200 (SXM) serving gpt-oss-120B (high) reaches 403 output tokens per second per query.
Which system is the most cost-efficient?
8xB200 (SXM) serving gpt-oss-120B (high) currently has the best cost efficiency in the Artificial Analysis System Load Test (AA-SLT), at $0.19 per one million input and one million output tokens. Artificial Analysis compares systems using cost per one million input and one million output tokens at a model-specific reference speed, so the most cost-efficient hardware depends on both the model and the target output speed.
Which system works best for each model?
In the current Artificial Analysis System Load Test (AA-SLT), all three featured models achieve their best results on 8xB200 (SXM) with NVIDIA's B200: DeepSeek R1 0528 (May '25) reaches 45,677 output tokens per second at peak throughput, Llama 4 Maverick reaches 48,198, and gpt-oss-120B (high) reaches 92,909.
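For readers who want to reason about the cost metric, the arithmetic is straightforward: a system's hourly price divided by the tokens it produces per hour at the reference speed. The sketch below simplifies the published metric, which prices one million input plus one million output tokens using separate input and output rates; all numbers are placeholders, not the published $0.19 result:

```python
def cost_per_million_tokens(
    price_per_hour: float,     # on-demand $/hr for the whole system
    tokens_per_second: float,  # sustained throughput at the reference speed
) -> float:
    """Dollars to produce one million tokens at the given throughput.
    Simplified: input and output token rates are folded into one."""
    tokens_per_hour = tokens_per_second * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# Hypothetical example (numbers invented for illustration):
print(f"${cost_per_million_tokens(price_per_hour=20.0, tokens_per_second=25_000):.2f}")
```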