Anthropic Launches Opus 4.5, Boosting Coding Performance and Conversation Length
Anthropic has released Opus 4.5, its newest frontier model, delivering improvements in coding capability and user experience. A key update addresses lengthy conversations, preventing the abrupt stops users previously hit when a conversation reached the 200,000-token context window limit.
Previously, Claude would terminate conversations reaching this limit rather than risk generating incoherent responses due to information loss. Now, the model summarizes earlier parts of the conversation, retaining crucial information while discarding less relevant details. This functionality extends to all current Claude models within the company’s apps and is also available to developers via context management and compaction tools in the Anthropic API.
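For developers, this behavior is surfaced through the API's context management tools. The sketch below is a minimal illustration using the Anthropic Python SDK; the beta identifier, model name, and `context_management` payload are assumptions for illustration and may differ from the shipped interface, so the official API documentation should be treated as authoritative.

```python
# Minimal sketch: opting into context management so long conversations are
# compacted instead of erroring out near the 200K-token window.
# NOTE: the beta flag, model id, and context_management payload are illustrative
# assumptions, not confirmed parameter values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-opus-4-5",                  # assumed model identifier
    max_tokens=1024,
    betas=["context-management-2025-06-27"],  # assumed beta identifier
    extra_body={
        # Assumed shape: ask the API to clear or compact older context
        # (e.g. stale tool results) as the conversation grows.
        "context_management": {
            "edits": [{"type": "clear_tool_uses_20250919"}]
        }
    },
    messages=[{"role": "user", "content": "Summarize our discussion so far."}],
)
print(response.content[0].text)
```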
Opus 4.5 scores 80.9 percent on the SWE-bench Verified benchmark, surpassing OpenAI’s GPT-5.1-Codex-Max (77.9 percent) and Google’s Gemini 3 Pro (76.2 percent). The model excels in agentic coding and tool use, though it currently trails GPT-5.1 in visual reasoning tasks (MMMU).