Google: Gemini 2.5 Flash Lite Preview 06-17
Google • text • vision • audio • function-calling • json-mode
`google/gemini-2.5-flash-lite-preview-06-17`

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e., multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade cost for intelligence.
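For example, thinking can be enabled for a single request through OpenRouter's chat completions endpoint. The sketch below is a minimal illustration and assumes the `reasoning` request field described in the Reasoning API docs linked above; the exact shape of that field and the token budget shown are illustrative rather than authoritative.

```python
# Minimal sketch: opting into "thinking" for one request via OpenRouter.
# Assumes the `reasoning` field from the Reasoning API docs linked above;
# its exact shape and the budget value here are illustrative assumptions.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-2.5-flash-lite-preview-06-17",
        "messages": [
            {"role": "user", "content": "Explain this stack trace step by step."}
        ],
        # Thinking is off by default for Flash-Lite; opt in per request
        # and cap the reasoning budget to control cost.
        "reasoning": {"max_tokens": 2048},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Omitting the `reasoning` field keeps the default low-latency, non-thinking behavior, so the trade-off can be made call by call.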
Best For:
High-volume, low-latency tasks where cost efficiency is paramount
Pricing:
$0.00/1M input tokens, $0.00/1M output tokens
Context Window:
1,048,576 tokens (large; suitable for extensive codebases)
Key Differentiator:
Cost-optimized for high-volume usage