Benchquill v3.7
Live Analysis: lower-cost models are getting closer to premium models on value
Direct answer for crawlers

HumanEval+ is used on Benchquill as a coding signal. It is most useful for short coding tasks, function completion, and programming accuracy. Do not treat one benchmark as the whole buying decision; compare it with price, context, speed, provider fit, and human-review risk.
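As a rough illustration of that kind of weighing, the sketch below folds a benchmark score, blended price, and context window into one composite number. This is a minimal sketch under loud assumptions: the function name `composite_score`, the weights, and the normalizations are all illustrative choices, not Benchquill's methodology.

```python
# Illustrative only: a toy composite that weighs a benchmark score against
# price and context window. The weights and normalizations below are
# assumptions for demonstration, not Benchquill's scoring method.

def composite_score(humaneval_plus: float, blended_cost_per_m: float,
                    context_tokens: float,
                    w_score: float = 0.6, w_cost: float = 0.25,
                    w_context: float = 0.15) -> float:
    """Blend accuracy, cost, and context into a single 0-100 value."""
    score_term = humaneval_plus                        # already on a 0-100 scale
    cost_term = 100 / (1 + blended_cost_per_m)         # cheaper -> closer to 100
    context_term = min(context_tokens / 1_000_000, 1) * 100  # cap credit at 1M tokens
    return w_score * score_term + w_cost * cost_term + w_context * context_term

# Example with the GPT-5 row from the table below:
print(round(composite_score(91.2, 7.81, 400_000), 1))
```

Changing the weights reorders the models, which is the point: a single benchmark score should be one input among several, not the decision.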

Model data

HumanEval+ models to inspect

| Rank | Model | Provider | Overall | Blended cost | Context |
|------|-------|----------|---------|--------------|---------|
| 1 | GPT-5.5 | OpenAI | 94.6 | $23.75/M | 1.05M |
| 2 | Claude Opus 4.7 | Anthropic | 93.8 | $20.00/M | 1M |
| 4 | GPT-5 | OpenAI | 91.2 | $7.81/M | 400K |
| 16 | GPT-5 mini | OpenAI | 82.6 | $1.56/M | 400K |
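To make the "value" framing in the Live Analysis line concrete, here is a minimal sketch that ranks the rows above by HumanEval+ points per blended dollar. The tuples simply restate the table; defining value as Overall divided by blended cost per million tokens is an assumption of this sketch, not a Benchquill metric.

```python
# Minimal sketch: rank the table rows by score per blended dollar.
# Data is copied from the table above; "value" here is assumed to mean
# Overall score / blended cost ($/M tokens).

rows = [
    ("GPT-5.5", 94.6, 23.75),
    ("Claude Opus 4.7", 93.8, 20.00),
    ("GPT-5", 91.2, 7.81),
    ("GPT-5 mini", 82.6, 1.56),
]

for model, overall, cost in sorted(rows, key=lambda r: r[1] / r[2], reverse=True):
    print(f"{model:<18} {overall / cost:6.1f} points per $/M")
```

On this metric the cheapest rows dominate, which is the pattern the Live Analysis line describes; a weighting that prizes raw accuracy over cost would reorder the list.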
Source and score type

Benchmark evidence note

| Top note | Score | Score type | Source |
|----------|-------|------------|--------|
| Claude Sonnet 4.6 | 95.8 | source-backed or provider-reported | evalplus.github.io |

Rows labeled "editorial composite" or "proxy" should not be quoted as official benchmark results without checking the linked source and model-version details.

Methodology notes

How Benchquill treats this benchmark

Benchquill benchmark pages are written as explainers, not raw score dumps. The goal is to make each benchmark usable for AI Overviews, comparison queries, and internal procurement notes by stating what the benchmark measures, where it is weak, and which adjacent model pages deserve review.