LFM2.5-1.2B-Instruct API Provider Benchmarking & Analysis
Analysis of API providers for LFM2.5-1.2B-Instruct across performance metrics, including latency (time to first token), output speed (output tokens per second), and price.
Fastest (output speed): 0 providers
Lowest Latency (time to first token): 0 providers
Lowest Price (blended price per 1M tokens): 0 providers
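Since no providers are listed, the "blended price" metric may be unfamiliar: it folds input and output token prices into a single per-1M-token figure using a fixed weighting. The helper below is a minimal sketch; the 3:1 input:output weighting and the function name are assumptions for illustration, and the actual weighting used by a given benchmark may differ.

```python
def blended_price(input_price_per_1m: float, output_price_per_1m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Blend input and output token prices (USD per 1M tokens) into one figure.

    Assumes a 3:1 input:output token ratio by default, a common but not
    universal convention; adjust the weights to match the workload.
    """
    total_weight = input_weight + output_weight
    return (input_price_per_1m * input_weight
            + output_price_per_1m * output_weight) / total_weight

# Example: $0.10 per 1M input tokens, $0.30 per 1M output tokens
print(round(blended_price(0.10, 0.30), 4))  # 0.15
```

With the default 3:1 weighting, a provider's blended price sits closer to its input price, reflecting that typical requests send more input tokens than they receive back.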
No API providers are currently available for LFM2.5-1.2B-Instruct.
Benchmarks of providers are not available for this model.
Please see the models page for LFM2.5-1.2B-Instruct for details of the model and how its intelligence compares to other models.
Update: The default performance benchmarking workload has been updated to 10k input tokens to better reflect production use cases. You can still select different workloads above.
Pricing
Input and Output Prices: LFM2.5-1.2B-Instruct Providers
Speed vs. Price: LFM2.5-1.2B-Instruct Providers
Speed
Measured by Output Speed (tokens per second)
Output Speed: LFM2.5-1.2B-Instruct Providers
Latency vs. Output Speed: LFM2.5-1.2B-Instruct Providers
Latency
Measured by Time (seconds) to First Token
Time to First Token: LFM2.5-1.2B-Instruct Providers
End-to-End Response Time
Seconds to output 500 tokens, calculated from time to first token, 'thinking' time for reasoning models, and output speed
End-to-End Response Time: LFM2.5-1.2B-Instruct Providers
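The end-to-end figure defined above combines the three measured quantities into one number. The sketch below shows one way to compute it, assuming that any 'thinking' tokens are generated at the same measured output speed; the function name and that assumption are illustrative, not a statement of the benchmark's exact methodology.

```python
def end_to_end_seconds(ttft_s: float, output_tokens_per_s: float,
                       thinking_tokens: int = 0,
                       answer_tokens: int = 500) -> float:
    """Estimate seconds to produce a 500-token answer.

    Combines time to first token, optional 'thinking' tokens for reasoning
    models, and the answer itself, with all tokens after the first assumed
    to stream at the measured output speed.
    """
    return ttft_s + (thinking_tokens + answer_tokens) / output_tokens_per_s

# Example: 0.5 s to first token, 100 tokens/s output speed
print(end_to_end_seconds(ttft_s=0.5, output_tokens_per_s=100.0))  # 5.5
```

Note how a reasoning model with the same latency and output speed but 200 thinking tokens would take two extra seconds at 100 tokens/s, which is why thinking time is counted separately from the visible answer.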
API Features
Function (Tool) Calling & JSON Mode: LFM2.5-1.2B-Instruct Providers
Context Window: LFM2.5-1.2B-Instruct Providers
Summary Table of Key Comparison Metrics
Frequently Asked Questions
Common questions about LFM2.5-1.2B-Instruct providers
LFM2.5-1.2B-Instruct is not currently available through any of the API providers we benchmark. As an open-weights model, it can be self-hosted.