LFM2 8B A1B Intelligence, Performance & Price Analysis
Model summary
- Intelligence: Artificial Analysis Intelligence Index
- Speed: Output tokens per second
- Input Price: USD per 1M tokens
- Output Price: USD per 1M tokens
- Verbosity: Output tokens from Intelligence Index
Metrics are compared against models of the same class:
- Non-reasoning models → compared only with other non-reasoning models
- Reasoning models → compared across both reasoning and non-reasoning models
- Open weights models → compared only with other open weights models of the same size class:
  - Tiny: ≤4B parameters
  - Small: 4B–40B parameters
  - Medium: 40B–150B parameters
  - Large: >150B parameters
- Proprietary models → compared across proprietary and open weights models of the same price range, using a blended 3:1 input/output price ratio:
  - <$0.15 per 1M tokens
  - $0.15–$1 per 1M tokens
  - >$1 per 1M tokens
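The 3:1 blended price used for the bands above weights input price three times as heavily as output price. A minimal sketch of that calculation (the helper name is illustrative, not from the source):

```python
def blended_price(input_usd_per_1m: float, output_usd_per_1m: float) -> float:
    """Blend input/output prices at a 3:1 ratio (USD per 1M tokens)."""
    return (3 * input_usd_per_1m + output_usd_per_1m) / 4

# Using the class averages cited later on this page ($0.10 in, $0.20 out):
print(blended_price(0.10, 0.20))  # ≈ 0.125, i.e. the <$0.15 band
```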
| Attribute | Value |
|---|---|
| Reasoning | No (this page shows the non-reasoning version of this model; a reasoning variant may also exist) |
| Input modality | Text |
| Output modality | Text |
| Context window | 33k tokens (~49 A4 pages in 12pt Arial) |
| Total parameters | 8.3B |
| Active parameters | 1.5B (parameters active per token during inference) |
| License | LFM 1.0 |
| Model weights | Available on Hugging Face |
LFM2 8B A1B is among the least intelligent models, but it is well priced compared with other open-weights non-reasoning models of similar size. The model supports text input, outputs text, and has a 33k-token context window.
LFM2 8B A1B scores 7 on the Artificial Analysis Intelligence Index, placing it at the lower end among comparable models (which average 12). In completing the Intelligence Index evaluations, it generated 7.8M tokens, somewhat verbose compared with the average of 5.3M.
Pricing for LFM2 8B A1B is $0.00 per 1M input tokens (competitively priced, average: $0.10) and $0.00 per 1M output tokens (competitively priced, average: $0.20). In total, it cost $0.00 to evaluate LFM2 8B A1B on the Intelligence Index.
Intelligence
Artificial Analysis Intelligence Index
Artificial Analysis Intelligence Index by Open Weights / Proprietary
Intelligence Evaluations
Openness
Artificial Analysis Openness Index: Results
Intelligence Index Comparisons
Intelligence vs. Price
Intelligence Index Token Use & Cost
Output Tokens Used to Run Artificial Analysis Intelligence Index
Cost to Run Artificial Analysis Intelligence Index
Context Window
Pricing
Pricing: Input and Output Prices
Intelligence vs. Price (Log Scale)
Pricing Comparison of LFM2 8B A1B API Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed
Output Speed vs. Price
Latency
Measured by Time (seconds) to First Token
Latency: Time To First Answer Token
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
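The metric above can be sketched as a simple sum of latency, thinking time, and generation time (the function name and sample numbers below are illustrative, not measurements from this page):

```python
def end_to_end_seconds(ttft_s: float, output_tokens_per_s: float,
                       thinking_tokens: int = 0, n_tokens: int = 500) -> float:
    """Estimate seconds to receive n_tokens of answer: time to first token,
    plus any 'thinking' tokens (reasoning models), plus generation time."""
    return ttft_s + (thinking_tokens + n_tokens) / output_tokens_per_s

# A non-reasoning model (thinking_tokens=0) at 250 tok/s with 0.5 s TTFT:
print(end_to_end_seconds(ttft_s=0.5, output_tokens_per_s=250))  # 2.5
```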
Model Size (Open Weights Models Only)
Model Size: Total and Active Parameters
Frequently Asked Questions
Common questions about LFM2 8B A1B
LFM2 8B A1B was released on October 7, 2025.
LFM2 8B A1B was created by Liquid AI.
LFM2 8B A1B scores 7 on the Artificial Analysis Intelligence Index, placing it at the lower end among other open weight non-reasoning models of similar size (median: 12).
When evaluated on the Intelligence Index, LFM2 8B A1B generated 7.8M output tokens, which is somewhat higher than average compared to other open weight non-reasoning models of similar size (median: 5.3M).
No, LFM2 8B A1B is not a reasoning model. It provides direct responses without extended chain-of-thought reasoning.
LFM2 8B A1B supports text input.
LFM2 8B A1B supports text output.
No, LFM2 8B A1B does not support image input. It can only process text.
No, LFM2 8B A1B is not multimodal. It only supports text input.
LFM2 8B A1B has a context window of 33k tokens. This determines how much text and conversation history the model can process in a single request.
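The "~49 A4 pages" figure from the summary table can be reproduced with rough rule-of-thumb conversions (both constants below are assumptions, not values stated on this page):

```python
TOKENS = 33_000
WORDS_PER_TOKEN = 0.75   # common rough estimate for English text (assumption)
WORDS_PER_A4_PAGE = 500  # ~12pt Arial, single-spaced (assumption)

pages = TOKENS * WORDS_PER_TOKEN / WORDS_PER_A4_PAGE
print(int(pages))  # 49
```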
Yes, LFM2 8B A1B is open weights. The model weights are publicly available and can be downloaded for self-hosting.
LFM2 8B A1B has 8.34 billion parameters (1.5 billion active).
LFM2 8B A1B is a Mixture of Experts (MoE) model with 8.34 billion total parameters, but only 1.5 billion active parameters are used during inference.
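As a quick check on the MoE figures above, the active fraction per token works out to roughly 18% of total parameters (computed directly from the counts on this page):

```python
total_params = 8.34e9   # total parameters
active_params = 1.5e9   # parameters active per token during inference

fraction = active_params / total_params
print(f"{fraction:.0%} of parameters active per token")  # about 18%
```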
LFM2 8B A1B is released under the LFM 1.0 license, which allows commercial use.
LFM2 8B A1B achieves a score of 7 on the Artificial Analysis Intelligence Index. This composite benchmark evaluates models across reasoning, knowledge, mathematics, and coding.
Yes, LFM2 8B A1B is available via API through 1 provider.