A nimble French model that fits on a single modern GPU but still codes and writes decently.
Price per 1 million tokens
per 1M tokens you send
per 1M tokens you receive
Input tokens are what you send to the AI; output tokens are what the AI sends back. These rates are set by the provider and reflect the current Mistral AI Mistral Small 3.1 API pricing. Prices accurate as of June 2025.
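As a back-of-the-envelope sketch (the rates and token counts below are made-up placeholders, not actual Mistral Small 3.1 pricing), a request's cost is each token count divided by one million and multiplied by the matching rate:

```python
# Rough cost estimate for a single request priced per 1M tokens.
# The rates used below are placeholders, not actual Mistral Small 3.1 pricing.

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Return the dollar cost of one request given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate_per_m \
        + (output_tokens / 1_000_000) * output_rate_per_m

# Example: a 2,000-token prompt and a 500-token reply at hypothetical
# rates of $0.10 (input) and $0.30 (output) per 1M tokens.
print(f"${estimate_cost(2_000, 500, 0.10, 0.30):.6f}")  # -> $0.000350
```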
A 24B-parameter dense model with a 128K-token context window, aimed at on-device reasoning and RAG-ready out of the box.
The context window is the maximum amount of text the model can "see" at once. Larger windows allow for longer conversations or documents.
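As a rough, hypothetical illustration of what that means in practice, the sketch below checks whether a document fits an assumed 128K-token window using the common ~4-characters-per-token rule of thumb; a real integration should count tokens with the provider's own tokenizer instead:

```python
# Very rough check of whether a document fits the model's context window.
# Uses the ~4-characters-per-token heuristic; exact counts require the
# provider's tokenizer.

CONTEXT_WINDOW_TOKENS = 128_000  # assumed window size, per the description above

def fits_in_context(text: str, reserved_for_reply: int = 1_000) -> bool:
    """Approximate whether `text` plus a reserved reply budget fits the window."""
    approx_tokens = len(text) // 4
    return approx_tokens + reserved_for_reply <= CONTEXT_WINDOW_TOKENS

print(fits_in_context("hello " * 50_000))  # ~75,000 tokens -> True
```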
Join thousands of users who have switched to API.chat and are saving on their AI expenses while enjoying a better experience.