Claude Opus 4.7
Anthropic Claude Opus 4.7. Status: Active. Intelligence Index: 57.2 (Competent).
Structured data from the Modeldex catalog.
Vision · Agentic · Long context
API release: Apr 16, 2026
Intelligence Index
57.2 / 100 (Competent), weighted across 5 benchmarks
- Reasoning: 87.9
- Long context: 68.7
- Instruction following: 46.6
- Medical: 25.6
Computed as the mean of per-category averages across MMLU, GPQA, SWE-bench, HumanEval, MATH, GSM8K, AIME, Aider Polyglot and more. See each benchmark for methodology.
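The composite can be reproduced from the four per-category averages listed above; a minimal sketch, assuming an unweighted mean (the catalog's exact weighting may differ):

```python
# Per-category averages as listed on this page.
category_scores = {
    "Reasoning": 87.9,
    "Long context": 68.7,
    "Instruction following": 46.6,
    "Medical": 25.6,
}

# Intelligence Index: mean of the per-category averages.
index = sum(category_scores.values()) / len(category_scores)
print(round(index, 1))  # 57.2
```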
History
Claude Opus 4.7 became available via the Anthropic API on 2026-04-16.
Training & availability
Anthropic has not released the underlying model weights — access is via their hosted API only.
Capabilities
- Context window: 1.0M tokens.
- Max output: 128K tokens.
- Input modalities: text, image.
Recommended for: vision, agentic, long-context.
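Since image input is supported, a vision request uses the Messages API content-block format. A minimal sketch; the image bytes here are placeholder data for illustration:

```python
import base64

# Placeholder bytes for illustration; in practice, read a real image file,
# e.g. image_bytes = open("photo.png", "rb").read()
image_bytes = b"\x89PNG placeholder"

# A user message with an image content block followed by a text prompt.
message = {
    "role": "user",
    "content": [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": base64.b64encode(image_bytes).decode("ascii"),
            },
        },
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}
# Pass as messages=[message] to client.messages.create(...) as in the quick start below.
```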
Quick start
Minimal example using the official `anthropic` Python SDK. Copy, paste, and replace the key.
```python
from anthropic import Anthropic

client = Anthropic(api_key="sk-ant-...")  # replace with your API key

resp = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
)
print(resp.content[0].text)
```

Benchmarks
| Benchmark | Score | Source |
|---|---|---|
| GPQA Diamond (Reasoning) | 87.88% accuracy | Third-party: llm-stats.com |
| HealthBench (Medical) | 25.59% | Third-party: llm-stats.com |
| LongBench v2 (Long context) | 53.89% | Third-party: llm-stats.com |
Integrations & tooling support
- Tool calling: Supported
- Structured outputs: Supported
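Tool calling uses the Messages API `tools` parameter. A minimal sketch of a tool definition; the tool name, description, and schema are hypothetical examples:

```python
# Illustrative tool definition in the Messages API `tools` format.
# The name, description, and JSON schema below are hypothetical.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
        },
        "required": ["city"],
    },
}

# Passed alongside the usual arguments:
# resp = client.messages.create(
#     model="claude-opus-4-7",
#     max_tokens=1024,
#     tools=[get_weather_tool],
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
# )
# Tool invocations come back as `tool_use` content blocks in resp.content.
```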
Price vs quality
Strong benchmark performance: top-tier scores across 5 benchmarks, but pricing is not publicly available, so check the provider.
- Quality percentile: 80.3%
- Effective price: not available
- Pricing breakdown: not available (input/output)
Community ratings
No ratings yet.