Codestral
Mistral's code-specialized model with a 32K-token context window.
Overview
Codestral is Mistral's model trained for code generation, completion, and fill-in-the-middle tasks.
History
Codestral was released on 2024-05-29.
Training & availability
Mistral AI has not released the underlying model weights — access is via their hosted API only.
Capabilities
- Context window: 32K tokens.
- Max output: 8K tokens.
- Input modalities: text.
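Given the 32K-token window and 8K-token output cap above, a cheap pre-flight size check can avoid rejected requests. A minimal sketch, assuming the common ~4 characters per token heuristic (the model's actual tokenizer will count differently):

```python
# Rough pre-flight check against Codestral's 32K-token context window.
# The ~4 chars/token ratio is an approximation, not the real tokenizer.
CONTEXT_WINDOW = 32_000
MAX_OUTPUT = 8_000


def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)


def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """True if the prompt likely fits alongside the reserved output budget."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW


print(fits_in_context("def add(a, b): return a + b"))
```

For prompts near the limit, count tokens with the provider's real tokenizer before sending.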
Limitations
- The context window (32K tokens) is modest by 2026 standards; it is unsuitable for processing long documents in a single request.
- Text-only: cannot process images, audio, or video inputs.
Pricing
- Input: $1.00 per 1M tokens
- Output: $3.00 per 1M tokens
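The listed per-token prices make a monthly estimate a one-line calculation. A minimal sketch, using a hypothetical workload of 10M input and 2M output tokens per month:

```python
# Monthly cost from the listed Codestral prices.
INPUT_PRICE = 1.00   # USD per 1M input tokens
OUTPUT_PRICE = 3.00  # USD per 1M output tokens


def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a month's token volume at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE \
        + (output_tokens / 1_000_000) * OUTPUT_PRICE


# Hypothetical workload: 10M input + 2M output tokens per month.
print(monthly_cost(10_000_000, 2_000_000))  # 16.0
```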
Quick start
Minimal example using the OpenRouter API. Copy, paste, replace the key.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",
)

resp = client.chat.completions.create(
    model="mistral/codestral",
    messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
)
print(resp.choices[0].message.content)
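Since Codestral is trained for fill-in-the-middle, requests can also supply code before and after the gap to be completed. A minimal sketch of assembling such a request body; the field and model names (`prompt`, `suffix`, `codestral-latest`) follow Mistral's hosted FIM API as an assumption, so check the current API reference before relying on them:

```python
# Build a fill-in-the-middle (FIM) request body: the model completes the gap
# between `prompt` (code before the cursor) and `suffix` (code after it).
# Field and model names are assumptions about Mistral's hosted API.
import json


def build_fim_request(prompt: str, suffix: str, max_tokens: int = 64) -> dict:
    """Assemble a FIM request body to send as JSON to the FIM endpoint."""
    return {
        "model": "codestral-latest",  # hosted-API model name (assumed)
        "prompt": prompt,
        "suffix": suffix,
        "max_tokens": max_tokens,
    }


body = build_fim_request("def fib(n):\n    ", "\nprint(fib(10))")
print(json.dumps(body, indent=2))
```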
Benchmarks
| Benchmark | Score | Source |
|---|---|---|
| Aider Polyglot (coding) | 11.1% pass@2 | Third-party: Papers With Code |
| HumanEval (coding) | 85.4% pass@1 | Self-reported: Mistral Codestral announcement |
Integrations & tooling support
- Tool calling: Not supported
- Structured outputs: Not supported
Price vs quality
Below-average benchmark performance for the price.
- Quality percentile: 2.4%
- Effective price: $2.50/1M tokens
- Pricing breakdown: $1.00/1M input, $3.00/1M output