Phi-3.5 Mini
A 3.8B-parameter instruction-following model targeting mobile and edge deployment.
Overview
Phi-3.5 Mini is Microsoft's lightweight 3.8B model optimized for edge and on-device inference with strong instruction following and multilingual support.
History
Phi-3.5 Mini was released on 2024-08-20.
Training & availability
Weights are publicly available under the MIT license, making this an open-weight model suitable for on-prem deployment and fine-tuning.
Capabilities
- Context window: 128K tokens
- Max output: 4K tokens
- Input modalities: text
- Recommended for: open-source
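The 128K context window and 4K output cap above can be sanity-checked before sending a request. A minimal sketch follows, assuming a coarse ~4-characters-per-token heuristic; the model's actual tokenizer gives exact counts, so treat this only as a rough pre-flight estimate.

```python
# Rough token-budget check against the limits listed above.
# Assumption: ~4 characters per token is a coarse heuristic only.
CONTEXT_WINDOW = 128_000  # tokens
MAX_OUTPUT = 4_000        # tokens

def estimate_tokens(text: str) -> int:
    """Coarse token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """True if the prompt plus reserved output budget fits the 128K window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("Explain quantum computing in one sentence."))  # True
```

Reserving the full 4K output budget up front avoids requests that would be truncated mid-generation.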
Limitations
- Text-only — cannot process images, audio, or video inputs.
Quick start
Minimal example using the OpenRouter API. Copy, paste, and replace the placeholder API key with your own.
# pip install openai
from openai import OpenAI
client = OpenAI(
base_url="https://openrouter.ai/api/v1",
api_key="sk-or-...",
)
resp = client.chat.completions.create(
model="microsoft/phi-3-5-mini",
messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
)
print(resp.choices[0].message.content)
Integrations & tooling support
- Tool calling: not supported
- Structured outputs: not supported
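Since structured outputs are not supported natively, a common workaround is to request JSON in the prompt and validate the reply client-side. Below is a minimal sketch; the helper name `extract_json` and the example reply are illustrative, not part of any API.

```python
import json
import re

def extract_json(reply: str) -> dict:
    """Pull the first JSON object out of a model reply, tolerating
    surrounding prose or markdown code fences. Raises ValueError if
    nothing parseable is found."""
    # Try a ```json fenced block first, then any brace-delimited span.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", reply, re.DOTALL)
    candidate = fenced.group(1) if fenced else None
    if candidate is None:
        brace = re.search(r"\{.*\}", reply, re.DOTALL)
        candidate = brace.group(0) if brace else None
    if candidate is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(candidate)

# Example: a reply that wraps JSON in prose and a code fence.
reply = 'Sure! Here you go:\n```json\n{"sentiment": "positive", "score": 0.9}\n```'
print(extract_json(reply)["sentiment"])  # positive
```

Wrapping the parse in a retry loop (re-prompting on ValueError) makes this approach reasonably robust in practice.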
Price vs quality
Priced low, making it a good fit for high-volume tasks. Quality tier is pending broader benchmark coverage.
- Quality percentile: —
- Effective price: $0.422/1M tokens
- Pricing breakdown: $0.13/1M input, $0.52/1M output
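The per-token rates above translate directly into a workload estimate. A minimal sketch follows; the input/output blend behind the $0.422/1M effective figure is not specified on this page, so the example computes cost straight from the listed input and output prices for an assumed workload.

```python
# Listed rates from the pricing breakdown above.
PRICE_IN = 0.13 / 1_000_000   # $ per input token
PRICE_OUT = 0.52 / 1_000_000  # $ per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed rates."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

def monthly_cost(requests_per_day: int, avg_in: int, avg_out: int,
                 days: int = 30) -> float:
    """Estimated monthly bill for a steady workload."""
    return requests_per_day * days * request_cost(avg_in, avg_out)

# Example: 10,000 requests/day, 800 input + 200 output tokens each.
print(f"${monthly_cost(10_000, 800, 200):.2f}/month")  # $62.40/month
```

Output tokens cost 4x input tokens here, so trimming verbose completions matters more than trimming prompts.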