Mistral: Mistral 7B Instruct v0.3
Mistral • text • function-calling • json-mode
mistralai/mistral-7b-instruct-v0.3

A high-performing, industry-standard 7.3B-parameter model, with optimizations for speed and context length. An improved version of [Mistral 7B Instruct v0.2](/models/mistralai/mistral-7b-instruct-v0.2), with the following changes:

- Extended vocabulary to 32,768
- Supports the v3 Tokenizer
- Supports function calling

NOTE: Support for function calling depends on the provider.
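Because function-calling support varies by provider, it can be worth probing it directly. Below is a minimal sketch using an OpenAI-compatible chat-completions client; the base URL, API key variable, and the `get_weather` tool are illustrative assumptions, not part of this listing.

```python
# Minimal sketch: sending a tool definition to mistral-7b-instruct-v0.3
# through an OpenAI-compatible endpoint. The base_url, API key variable,
# and the get_weather tool below are hypothetical, for illustration only.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-gateway.invalid/v1",  # assumption: your provider's endpoint
    api_key=os.environ["PROVIDER_API_KEY"],         # assumption: your provider's key
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for demonstration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct-v0.3",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the provider supports function calling for this model, the reply may
# contain tool_calls rather than plain text.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

If the provider does not support function calling for this model, the request may either return a plain-text completion or be rejected, so handling both branches as above is a reasonable default.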
Best For:
High-volume, low-latency tasks where cost efficiency is paramount
Pricing:
$0.00/1M input tokens, $0.00/1M output tokens
Context Window:
32,768 tokens
Key Differentiator:
Cost-optimized for high-volume usage