Liquid AI has launched a newer model, LFM2.5-1.2B-Instruct; we suggest considering that model instead.
For more information, see Comparison of LFM2.5-1.2B-Instruct to other models and API provider benchmarks for LFM2.5-1.2B-Instruct.
LFM2 1.2B API Provider Benchmarking & Analysis
Analysis of API providers for LFM2 1.2B across performance metrics including latency (time to first token), output speed (output tokens per second), price, and others.
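To make those metric definitions concrete, below is a minimal sketch of how latency (time to first token) and output speed (output tokens per second) could be measured against any OpenAI-compatible streaming endpoint. The base URL, API key, and model name are placeholders (no provider currently serves LFM2 1.2B), and the token count is approximated by counting streamed chunks; this is illustrative rather than the exact methodology behind these benchmarks.

```python
import time
from openai import OpenAI

# Placeholder endpoint and credentials -- substitute a real OpenAI-compatible
# provider and model name; none currently serve LFM2 1.2B.
client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_KEY")

def measure(model: str, prompt: str) -> dict:
    """Measure time to first token and approximate output tokens per second."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0

    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()
            chunks += 1  # rough proxy for output tokens

    end = time.perf_counter()
    ttft = (first_token_at or end) - start
    generation_time = end - (first_token_at or end)
    return {
        "time_to_first_token_s": ttft,
        "output_tokens_per_s": chunks / generation_time if generation_time > 0 else 0.0,
    }

print(measure("lfm2-1.2b", "Summarize the benefits of small language models."))
```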
No API providers are currently available for LFM2 1.2B, so provider benchmarks are not shown for this model.
Please see the models page for LFM2 1.2B for details of the model and how its intelligence compares to other models.
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above.
Pricing
Input and Output Prices: LFM2 1.2B Providers
Speed vs. Price: LFM2 1.2B Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed: LFM2 1.2B Providers
Latency vs. Output Speed: LFM2 1.2B Providers
Latency
Measured by Time (seconds) to First Token
Time to First Token: LFM2 1.2B Providers
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time: LFM2 1.2B Providers
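As a concrete illustration of that calculation, the sketch below combines the three components into a single end-to-end figure. The numbers and the assumption of zero 'thinking' time (LFM2 1.2B is not a reasoning model) are illustrative, not measured values.

```python
def end_to_end_seconds(ttft_s: float, thinking_s: float,
                       output_speed_tps: float, output_tokens: int = 500) -> float:
    """End-to-end response time: time to first token, plus 'thinking' time for
    reasoning models, plus the time to stream the requested output tokens."""
    return ttft_s + thinking_s + output_tokens / output_speed_tps

# Illustrative numbers only (no measured values exist for LFM2 1.2B providers):
# 0.3 s to first token, no thinking time, 200 output tokens per second.
print(round(end_to_end_seconds(0.3, 0.0, 200.0), 2))  # -> 2.8
```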
API Features
Function (Tool) Calling & JSON Mode: LFM2 1.2B Providers
Context Window: LFM2 1.2B Providers
Summary Table of Key Comparison Metrics
Frequently Asked Questions
Common questions about LFM2 1.2B providers
LFM2 1.2B is not currently available through any API providers we benchmark. As an open weights model, it can be self-hosted.
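For those who want to self-host, a minimal sketch using Hugging Face Transformers is shown below. The repository id LiquidAI/LFM2-1.2B is an assumption about where the open weights are published, and the generation settings are illustrative defaults; consult Liquid AI's model card for the recommended setup and the minimum transformers version that supports the LFM2 architecture.

```python
# Minimal self-hosting sketch with Hugging Face Transformers.
# Assumption: the open weights are published as "LiquidAI/LFM2-1.2B".
# A recent transformers release may be required for LFM2 support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 1.2B parameters fits on a single consumer GPU or CPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Give three use cases for a 1.2B on-device model."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```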