Benchquill v3.7
Live analysis: lower-cost models are closing the value gap with premium models.

Benchquill uses MMMU as a vision signal: it is most useful for gauging multimodal understanding across images, charts, diagrams, and documents. Do not treat any single benchmark as the whole buying decision; weigh it against price, context, speed, provider fit, and human-review risk.

Model data

MMMU models to inspect

Rank  Model                   Provider   MMMU overall  Blended cost ($/M tokens)  Context (tokens)
1     GPT-5.5                 OpenAI     94.6          $23.75                     1.05M
2     Claude Opus 4.7         Anthropic  93.8          $20.00                     1M
3     Gemini 3.1 Pro Preview  Google     92.4          $9.50                      1M
13    Llama 4 Maverick        Meta       84.7          $0.49                      1M
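
A minimal sketch of how the table above can feed a value comparison. It uses a naive score-per-dollar metric (MMMU overall divided by blended cost), which is an illustrative assumption, not Benchquill's ranking formula; the row values are copied from the table.

```python
from dataclasses import dataclass

@dataclass
class ModelRow:
    name: str
    provider: str
    mmmu_overall: float        # MMMU overall score from the table above
    blended_cost_per_m: float  # blended cost in $/M tokens from the table
    context: str               # context window as listed

rows = [
    ModelRow("GPT-5.5", "OpenAI", 94.6, 23.75, "1.05M"),
    ModelRow("Claude Opus 4.7", "Anthropic", 93.8, 20.00, "1M"),
    ModelRow("Gemini 3.1 Pro Preview", "Google", 92.4, 9.50, "1M"),
    ModelRow("Llama 4 Maverick", "Meta", 84.7, 0.49, "1M"),
]

# Sort by score points per blended dollar (higher = better value).
# This is one axis among several; it ignores speed, provider fit,
# and human-review risk mentioned above.
for row in sorted(rows, key=lambda r: r.mmmu_overall / r.blended_cost_per_m, reverse=True):
    value = row.mmmu_overall / row.blended_cost_per_m
    print(f"{row.name:24} {row.mmmu_overall:5.1f} MMMU  ${row.blended_cost_per_m:6.2f}/M  {value:7.1f} pts/$")
```

On this metric a cheap model like Llama 4 Maverick dominates despite the lower score, which is exactly why a single-number value ranking should be read alongside the raw scores rather than instead of them.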
Source and score type

Benchmark evidence note

Top note                Score  Score type                            Source
Gemini 3.1 Pro Preview  94.6   source-backed or editorial composite  mmmu-benchmark.github.io

Rows labeled editorial composite or proxy should not be quoted as official benchmark results without checking the linked source and model-version details.
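
A minimal sketch of that verification rule: any row whose score type is not unambiguously source-backed gets flagged for a manual check of the linked source and model version before it is quoted. The function name, the label set, and the normalization step are hypothetical conventions for illustration, not Benchquill's internal pipeline.

```python
# Score types that can be quoted directly, assuming the page's own labels.
QUOTABLE_SCORE_TYPES = {"source-backed"}

def needs_verification(score_type: str) -> bool:
    """Return True for editorial-composite, proxy, or ambiguous rows that
    must be checked against the linked source before being quoted."""
    normalized = score_type.strip().lower()
    return normalized not in QUOTABLE_SCORE_TYPES

# The row above carries an ambiguous label, so it gets flagged:
print(needs_verification("source-backed or editorial composite"))  # True -> verify first
print(needs_verification("source-backed"))                         # False -> quotable with version details
```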

Methodology notes

How Benchquill treats this benchmark

Benchquill benchmark pages are written as explainers, not raw score dumps. The goal is to make each benchmark usable for AI Overviews, comparison queries, and internal procurement notes by stating what the benchmark measures, where it is weak, and which adjacent model pages deserve review.