GPT-5.4 nano
Active. Ultra-fast nano variant of GPT-5.4.
Intelligence Index
43.3 / 100 (Basic tier), weighted across 7 benchmarks:
- Factual grounding: 90.6
- Reasoning: 60.6
- Medical: 27.5
- Long context: 19.0
- Instruction following: 18.8
Computed as the mean of per-category averages across MMLU, GPQA, SWE-bench, HumanEval, MATH, GSM8K, AIME, Aider Polyglot, and more; see each benchmark for methodology.
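The index above can be reproduced from the per-category averages listed on this page. A minimal sketch, assuming the index is simply the unweighted mean of those five category scores:

```python
# Intelligence Index sketch: mean of per-category average scores.
# Category values are taken from the list above.
category_averages = {
    "Factual grounding": 90.6,
    "Reasoning": 60.6,
    "Medical": 27.5,
    "Long context": 19.0,
    "Instruction following": 18.8,
}

index = sum(category_averages.values()) / len(category_averages)
print(round(index, 1))  # 43.3
```

This matches the 43.3 / 100 shown above, which is consistent with the "mean of per-category averages" description.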
History
GPT-5.4 nano became available via the OpenAI API on 2026-03-17.
Training & availability
Training data has a knowledge cutoff of 2025-08-31; information about events after that date is unlikely to appear in the model's responses. OpenAI has not released the underlying model weights; access is via their hosted API only.
Capabilities
- Context window: 272K tokens
- Max output: 128K tokens
- Input modalities: text, image
Recommended for: vision, agentic, and long-context workloads.
Quick start
Minimal example using the openai Python SDK. Copy, paste, and replace the API key.
```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # replace with your API key

resp = client.chat.completions.create(
    model="gpt-5-4-nano",
    messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
)
print(resp.choices[0].message.content)
```
Benchmarks
| Benchmark | Score | Source |
|---|---|---|
| FACTS Grounding (factual grounding) | 90.58% | Third-party (llm-stats.com) |
| GPQA Diamond (reasoning) | 60.61% accuracy | Third-party (llm-stats.com) |
| HealthBench (medical) | 27.53% | Third-party (llm-stats.com) |
Integrations & tooling support
- Tool calling: supported
- Structured outputs: supported
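A minimal sketch of tool calling with this model, assuming the standard Chat Completions tool-calling interface. The `get_weather` schema is hypothetical, invented here for illustration; the request itself is wrapped in a function so nothing is sent without a real API key.

```python
# Hypothetical tool schema -- get_weather is illustrative only,
# not something this model card defines.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}


def ask_with_tools(api_key: str):
    """Send a prompt with the tool available; the model may reply with a tool call."""
    # Deferred import so the schema can be inspected without the SDK installed.
    from openai import OpenAI

    client = OpenAI(api_key=api_key)
    resp = client.chat.completions.create(
        model="gpt-5-4-nano",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=[WEATHER_TOOL],
    )
    # When the model chooses the tool, the text content is empty and
    # tool_calls carries the function name and JSON arguments instead.
    return resp.choices[0].message
```

If the returned message contains `tool_calls`, execute the named function with the parsed arguments and send the result back in a follow-up `tool` role message.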
Price vs quality
Performance trails frontier models. Pricing is not publicly available; check with the provider.
- Quality percentile: 38.8%
- Effective price: —
- Pricing breakdown: — in / — out