Mistral 7B
Compact open-weights model that outperforms Llama 2 13B on many benchmarks.
Overview
Mistral 7B uses grouped-query attention and sliding-window attention to punch well above its weight at 7 billion parameters. Apache-2.0 licensed.
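Grouped-query attention shares each key/value head across a group of query heads (cutting KV-cache size), while sliding-window attention limits each position to the most recent W tokens. A toy NumPy sketch of the two ideas combined; shapes, names, and the single-layer framing are illustrative, not Mistral's actual implementation:

```python
import numpy as np

def sliding_window_gqa(q, k, v, n_kv_heads, window):
    """Toy grouped-query attention with a sliding-window causal mask.

    q: (n_heads, seq, d); k, v: (n_kv_heads, seq, d).
    Each group of n_heads // n_kv_heads query heads shares one KV head.
    """
    n_heads, seq, d = q.shape
    group = n_heads // n_kv_heads
    idx = np.arange(seq)
    # Causal mask restricted to the last `window` positions.
    mask = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < window)
    out = np.empty_like(q)
    for h in range(n_heads):
        kv = h // group                        # query head h reads shared KV head kv
        scores = q[h] @ k[kv].T / np.sqrt(d)   # (seq, seq) attention logits
        scores = np.where(mask, scores, -np.inf)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[h] = w @ v[kv]
    return out
```

With window=1 each position can only attend to itself, so the output of every query head is exactly its shared value head, which makes the KV-sharing structure easy to verify.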
History
Mistral 7B was released on 2023-09-27.
Training & availability
Weights are publicly available under the Apache-2.0 license, making this an open-weight model suitable for on-prem deployment and fine-tuning.
Capabilities
- Context window: 32K tokens
- Max output: 8K tokens
- Input modalities: text
- Recommended for: open-source and self-hosted deployments
Limitations
- The context window (32K tokens) is modest by 2026 standards, making it unsuitable for processing long documents in a single request.
- Text-only: cannot process images, audio, or video inputs.
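A common workaround for the long-document limitation is to split inputs into overlapping chunks that each fit the context window. A rough sketch using a ~4-characters-per-token heuristic; the ratio and overlap size are assumptions, and a real tokenizer would give exact counts:

```python
def chunk_text(text, max_tokens=32_000, chars_per_token=4, overlap_tokens=200):
    """Split text into overlapping chunks sized to fit a token budget.

    Token counts are approximated at ~4 characters per token, so leave
    headroom in max_tokens for the prompt and the model's response.
    """
    max_chars = max_tokens * chars_per_token
    overlap = overlap_tokens * chars_per_token
    step = max_chars - overlap  # advance by window minus overlap
    return [text[i:i + max_chars] for i in range(0, len(text), step)]
```

Each chunk can then be sent as a separate request, with the overlap preserving context across chunk boundaries.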
Quick start
Minimal example using the OpenRouter API. Copy, paste, replace the key.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",
)
resp = client.chat.completions.create(
    model="mistral/mistral-7b",
    messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
)
print(resp.choices[0].message.content)
Integrations & tooling support
- Tool calling: not supported
- Structured outputs: not supported
Price vs quality
Priced low, making it a good fit for high-volume tasks. Quality tier is pending more benchmark coverage.
- Quality percentile: —
- Effective price: $0.17/1M tokens
- Pricing breakdown: $0.11/1M input, $0.19/1M output
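Estimating a bill reduces to multiplying token volumes by the per-token rates above. A sketch that also reproduces the $0.17/1M effective rate under an assumed 1:3 input-to-output token split (the split is an assumption that happens to match the listed figure):

```python
# Rates from the pricing breakdown above, converted to dollars per token.
IN_RATE = 0.11 / 1_000_000   # $ per input token
OUT_RATE = 0.19 / 1_000_000  # $ per output token

def monthly_cost(input_tokens, output_tokens):
    """Dollar cost for a month's token volumes at the listed rates."""
    return input_tokens * IN_RATE + output_tokens * OUT_RATE

# Blended per-1M rate at a 1:3 input/output split (an assumed workload mix).
effective_per_1m = 0.25 * 0.11 + 0.75 * 0.19
```

For example, a workload of 10M input and 2M output tokens per month would cost about $1.48.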