Claude Opus 4.7

💡

A new Opus model with significant gains in coding, high-resolution vision, and instruction following. Claude Code adds /ultrareview and the xhigh effort level.

🔗 Official announcement →

This article is a summary based on official documentation.

Overview

Anthropic released Claude Opus 4.7 on April 16, 2026. It delivers broad improvements across coding, vision, and instruction following, while pricing stays the same as Opus 4.6 ($5 / $25 per MTok). Model ID: claude-opus-4-7. Available on Claude.ai, Claude Code, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

Key improvements

  • Software engineering — on long-running agent tasks, prior models tended to drift from initial instructions and loosen constraints over time. Opus 4.7 holds strictness and consistency over extended sessions. 10–14% average improvement over Opus 4.6 on major coding evals.

  • High-resolution visual understanding — processes up to 2,576 px (~3.75 megapixels) natively, a 3× resolution jump over prior Opus. Internal visual accuracy eval: 54.5% → 98.5%. Makes chemistry diagrams, circuit schematics, and UI screenshots usable.

  • Literal instruction following — earlier models inferred intent and occasionally abbreviated or skipped steps. Opus 4.7 is tuned to follow instructions literally — good for checklist-heavy workflows with strict formatting or forbidden conditions.

  • Filesystem-based memory — more reliable cross-session workflows where prior-session decisions and notes are left as files and picked up by the next session.

  • Document analysis citations — 21% fewer citation errors. BigLaw Bench: 90.9%. Finance Agent: state of the art.
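The filesystem-based memory pattern above can be illustrated with a minimal sketch: one session leaves its decisions behind as a plain file, and the next session reads them back. The filename and note contents here are made up for illustration; the announcement doesn't prescribe a format.

```python
from pathlib import Path
import tempfile

# Hypothetical notes file: the handoff is just a plain file on disk
# that one session writes and a later session reads.
workdir = Path(tempfile.mkdtemp())
notes = workdir / "session_notes.md"

# End of session 1: persist decisions as a file.
notes.write_text("- Use tabs, not spaces\n- Target the staging API\n")

# Start of session 2: pick the notes back up into context.
carried_over = notes.read_text()
print(carried_over)
```

The point is that no special API is involved: reliability comes from the model consistently writing and honoring such files across sessions.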

New capabilities

  • xhigh effort level — a new level between high and max for finer control over reasoning depth vs. latency. Particularly impactful in multi-turn agent setups.

  • Task Budgets (API, public beta) — guides per-task token spend so agent runs don’t drift over budget.

  • /ultrareview in Claude Code — a deeper, stricter review session than /review that reads through changes and flags bugs and design issues.
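For illustration only, a request combining the new controls might be shaped like the sketch below. `model` and `max_tokens` follow the standard Messages API, but `effort` and `task_budget_tokens` are assumed field names: Task Budgets is in public beta and the announcement doesn't give the exact schema, so check the official API reference before relying on them.

```python
# Sketch of a request payload using the new controls.
# NOTE: "effort" and "task_budget_tokens" are assumed field names for
# illustration, not confirmed API parameters.
request = {
    "model": "claude-opus-4-7",       # model ID from the announcement
    "max_tokens": 4096,
    "effort": "xhigh",                # new level between "high" and "max"
    "task_budget_tokens": 200_000,    # Task Budgets (public beta)
    "messages": [
        {"role": "user", "content": "Review this diff for bugs."}
    ],
}
```

A budget like this is meant to guide per-task spend so long agent runs don't drift over budget, rather than hard-capping a single response the way `max_tokens` does.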

Benchmarks (vs Opus 4.6)

| Area | Result |
| --- | --- |
| Coding eval average | +10–14% |
| Visual accuracy | 54.5% → 98.5% |
| Document citation errors | −21% |
| BigLaw Bench (legal) | 90.9% |
| Finance Agent | SOTA |

Pricing and availability

| Item | Value |
| --- | --- |
| Input | $5 / MTok (unchanged) |
| Output | $25 / MTok (unchanged) |
| Model ID | claude-opus-4-7 |
| Platforms | Claude.ai, Claude Code, Claude API, Bedrock, Vertex AI, Foundry |
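At the unchanged list prices, per-run cost is simple arithmetic; the token counts below are made-up inputs for illustration.

```python
INPUT_PER_MTOK = 5.00    # USD per million input tokens
OUTPUT_PER_MTOK = 25.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one run at Opus 4.7 list prices."""
    return (input_tokens / 1_000_000) * INPUT_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PER_MTOK

# e.g. 2M input tokens and 100k output tokens:
print(estimate_cost(2_000_000, 100_000))  # → 12.5
```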

Notes

  • New tokenizer — Opus 4.7 ships with an updated tokenizer; the same text can map to 1.0–1.35× more tokens than 4.6. Re-check token usage estimates and budgets.
  • More thinking at higher effort — xhigh and high levels consume more thinking tokens in multi-turn agent setups. Pair with prompt caching (ENABLE_PROMPT_CACHING_1H) to manage cost.
  • Safety profile — Anthropic reports overall safety profile comparable to Opus 4.6, with improvements in honesty and resistance to prompt injection attacks.
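Because the same text can map to up to 1.35× more tokens under the new tokenizer, existing budget estimates made against 4.6 can be scaled by the worst-case ratio as a first pass. A minimal sketch (the helper name and default are mine, not from the docs):

```python
TOKENIZER_RATIO_MAX = 1.35  # reported worst-case inflation vs. the 4.6 tokenizer

def adjust_budget(old_estimate_tokens: int, ratio: float = TOKENIZER_RATIO_MAX) -> int:
    """Scale a token estimate made against the 4.6 tokenizer to a
    conservative 4.7 estimate, rounded to the nearest token."""
    return round(old_estimate_tokens * ratio)

print(adjust_budget(100_000))  # → 135000
```

The actual inflation is content-dependent (1.0–1.35×), so treat this as an upper-bound planning number, not a measurement.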

Frequently Asked Questions

What's the headline for Opus 4.7?

A new Opus with broad gains in coding, high-resolution vision, and instruction following. Model ID: `claude-opus-4-7`. Pricing matches Opus 4.6 ($5/$25 per MTok). Claude Code gains the `/ultrareview` command and the `xhigh` effort level.

When is it available?

Released on April 16, 2026 across Claude.ai, Claude Code, Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

Does it affect my existing setup?

The tokenizer was updated, so the same text can map to 1.0–1.35× more tokens than on 4.6. Re-check token usage estimates and budgets.

What are the concrete performance gains?

Roughly 10–14% improvement on major coding evals vs. Opus 4.6; internal vision accuracy 54.5% → 98.5%; 21% fewer citation errors in document analysis. Reported results also include 90.9% on BigLaw Bench and SOTA on the Finance Agent benchmark.

Where is the official announcement?

Announcement: anthropic.com/news/claude-opus-4-7