Codestral 2508
Status: Active. Specialized code generation model.
History
Codestral 2508 became available via the Mistral AI API on 2025-08-01.
Training & availability
Training data has a knowledge cutoff of 2025-03-31 — information about events after that date is unlikely to appear in the model's responses. Mistral AI has not released the underlying model weights — access is via their hosted API only.
Capabilities
- Context window: 256K tokens.
- Max output: 256K tokens.
- Input modalities: text.

Recommended for: agentic, long-context workloads.
Limitations
- The knowledge cutoff is 12 months old — this model will not know about recent events, releases, or API changes.
- Text-only — cannot process images, audio, or video inputs.
Quick start
Minimal example using the OpenRouter API. Copy, paste, replace the key.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # replace with your OpenRouter API key
)

resp = client.chat.completions.create(
    model="mistral/codestral-2508",
    messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
)

print(resp.choices[0].message.content)
```
Providers & performance
1 provider. Inference routes for this model, sorted by throughput. Latency is time-to-first-token (TTFT); throughput is output tokens per second. Data from OpenRouter, measured over the last 30 minutes.
| Provider | Throughput | Latency (TTFT) | Input $ / 1M | Output $ / 1M | Context | Quant | Supports |
|---|---|---|---|---|---|---|---|
| Mistral | 20 tok/s | 139 ms | $0.30 | $0.90 | 256K | — | tools · json |
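As a rough sanity check, end-to-end generation time can be estimated from these figures as TTFT plus output tokens divided by throughput. This is a simplification that ignores queueing, rate limits, and streaming overhead:

```python
def estimate_generation_seconds(ttft_ms: float, throughput_tok_s: float, output_tokens: int) -> float:
    """Rough end-to-end time: time-to-first-token plus decode time."""
    return ttft_ms / 1000 + output_tokens / throughput_tok_s

# Using the Mistral route above: 139 ms TTFT, 20 tok/s.
t = estimate_generation_seconds(139, 20, 500)
print(f"{t:.1f} s")  # ~25.1 s for a 500-token completion
```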
Integrations & tooling support
- Tool calling: Supported
- Structured outputs: Supported
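A minimal sketch of what tool calling looks like through the OpenAI-compatible API. The `get_weather` tool and its dispatcher are hypothetical, shown only to illustrate the shape of the request and the local execution loop:

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(name: str, arguments: str) -> str:
    """Execute a tool call returned by the model (stubbed locally here)."""
    args = json.loads(arguments)
    if name == "get_weather":
        return json.dumps({"city": args["city"], "temp_c": 21})  # stub data
    raise ValueError(f"unknown tool: {name}")

# With the client from the quick start, pass tools=TOOLS to
# client.chat.completions.create(...). When the response contains
# tool_calls, run dispatch(call.function.name, call.function.arguments)
# and send the result back as a {"role": "tool"} message.
```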
Price vs quality
Priced low — good for high-volume tasks. Quality tier pending more benchmark coverage.
- Quality percentile: —
- Effective price: $0.75/1M
- Pricing breakdown: $0.30/1M input, $0.90/1M output
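The $0.75/1M effective figure is consistent with a blended rate weighted 1:3 toward output tokens. That weighting is an assumption about how the listing computes it, not something stated on this page:

```python
INPUT_PRICE = 0.30   # $ per 1M input tokens
OUTPUT_PRICE = 0.90  # $ per 1M output tokens

def blended_price(input_share: float) -> float:
    """Blended $ per 1M tokens for a given input-token share of traffic."""
    return input_share * INPUT_PRICE + (1 - input_share) * OUTPUT_PRICE

# Assumed 25% input / 75% output mix reproduces the listed figure.
print(blended_price(0.25))  # 0.25 * 0.30 + 0.75 * 0.90 = 0.75
```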