
Claude Haiku 4.5

Model ID: claude-haiku-4-5-20251001
2025-10-15 · Proprietary Model
Availability: Claude Free · Claude Pro · Claude Max (5x) · Claude Max (20x) · API
Overall: No. 22
Popularity: No. 80

Claude Haiku 4.5 is Anthropic's fastest and most efficient model, delivering near-frontier intelligence at a fraction of the cost and latency of larger Claude models. Matching Claude Sonnet 4's performance across reasoning, coding, and computer-use tasks, Haiku 4.5 brings frontier-level capability to real-time and high-volume applications. It introduces extended thinking to the Haiku line, enabling controllable reasoning depth, summarized or interleaved thought output, and tool-assisted workflows with full support for coding, bash, web search, and computer-use tools. Scoring >73% on SWE-bench Verified, Haiku 4.5 ranks among the world's best coding models while maintaining exceptional responsiveness for sub-agents, parallelized execution, and scaled deployment.
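As a rough sketch of what a "controllable reasoning depth" request looks like (assuming the general shape of Anthropic's Messages API request body; exact field names may differ across SDK versions), an extended-thinking call caps how many tokens the model may spend reasoning before it answers:

```python
# Sketch of an extended-thinking request body for Claude Haiku 4.5.
# The "thinking" block is the assumed knob for reasoning depth: a larger
# budget_tokens value allows deeper reasoning before the final answer.
def build_request(prompt: str, thinking_budget: int = 4096) -> dict:
    return {
        "model": "claude-haiku-4-5-20251001",
        "max_tokens": 8192,  # must exceed the thinking budget
        "thinking": {
            "type": "enabled",
            "budget_tokens": thinking_budget,
        },
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request("Refactor this function to be tail-recursive.")
```

This payload would be posted to the Messages endpoint; the response then carries the (summarized or interleaved) thinking output alongside the answer.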

Knowledge Cutoff
2025-07-01

The date this AI finished learning. It may not know about things that happened after this date.

Input → Output Format

The types of content this AI can receive, and what it can produce in return.

Context Memory
200K in / 64K out

The maximum amount of text the AI can read and process in a single request. A larger number means it can handle longer documents or conversations.

Cost / 1M Tokens
$1 in / $5 out

The cost of using this AI directly in your own application. Shown in USD per 1 million units of text (tokens).
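Using the rates listed above ($1 per million input tokens, $5 per million output tokens), the cost of a single request is straightforward to estimate. A minimal sketch, with the rates hardcoded from this page:

```python
# Estimate the USD cost of one request at Haiku 4.5's listed rates.
INPUT_RATE = 1.00 / 1_000_000   # $ per input token
OUTPUT_RATE = 5.00 / 1_000_000  # $ per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 150K-token document summarized into a 2K-token answer:
cost = request_cost(150_000, 2_000)  # 0.15 + 0.01 = $0.16
```

Note the 5x asymmetry: for long-document workloads the input side dominates, while chat-heavy workloads with long generations pay mostly on the output side.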

AI Performance Evaluation

Arena Overall Score: 1407 ±3 (as of 2026-04-02)
Overall Rank: No. 80 (57,465 votes)
Arena by Ability
Hard Prompts: 1437 ±4 · No. 64
Expert Knowledge: 1443 ±11 · No. 59
Instruction Following: 1411 ±5 · No. 61
Conversation Memory: 1421 ±7 · No. 63
Creative: 1384 ±7 · No. 70
Coding: 1476 ±6 · No. 44
Math: 1393 ±10 · No. 105
Arena by Occupation
Creative Writing: 1394 ±6 · No. 75
Social Sciences: 1422 ±7 · No. 82
Media: 1381 ±6 · No. 76
Business: 1413 ±6 · No. 70
Healthcare: 1417 ±11 · No. 96
Legal: 1405 ±10 · No. 91
Software: 1459 ±5 · No. 56
Mathematics: 1417 ±12 · No. 73
Reasoning Ability
AA Intelligence Index: 37% (↓2%)
MMLU-Pro: 76% (↓7%)
GPQA Diamond: 67% (↓15%)
HLE: 9.7% (↓7%)
Math
AA Math Index: 84% (↑9%)
AIME 2025: 84% (↑9%)
Coding Ability
AA Coding Index: 33% (↓4%)
LiveCodeBench: 62% (↓4%)
SciCode: 43% (↑1%)
TerminalBench: 27% (↓7%)
Instruction Following
IFBench: 54% (↓3%)
Hallucination Rate (HHEM): 9.8% (↓1%)
Factual Consistency (HHEM): 90% (↑1%)
Long Context
AA-LCR: 70% (↑7%)
Agentic AI Ability
TAU2: 55% (↓17%)
Speed
Standard Mode: 99 tok/sec (↑21) · First Output: 0.51s (Artificial Analysis)
Reasoning Mode: 125 tok/sec (↑52) · First Output: 11.81s (Artificial Analysis)
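The throughput and first-token figures above combine into a back-of-envelope end-to-end latency estimate (a sketch only; real latency varies with load, prompt length, and region):

```python
# Rough end-to-end latency: time to first token plus generation time
# at the measured steady-state throughput (figures from this page).
def total_latency(output_tokens: int, first_token_s: float, tok_per_sec: float) -> float:
    return first_token_s + output_tokens / tok_per_sec

standard = total_latency(1_000, 0.51, 99)     # ~10.6 s for 1K output tokens
reasoning = total_latency(1_000, 11.81, 125)  # ~19.8 s; startup dominates
```

The comparison illustrates the trade-off in the table: reasoning mode streams faster once it starts (125 vs. 99 tok/sec) but pays a much larger time-to-first-token, so for short outputs standard mode finishes sooner.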