AACR-Bench: Evaluating Automatic Code Review with Holistic Repository-Level Context

Lei Zhang, Yongda Yu, Minghui Yu, Xinxin Guo, Zhengqi Zhuang, Guoping Rong, Dong Shao, Haifeng Shen, Hongyu Kuang, Zhengfeng Li, Boge Wang, Guoan Zhang, Bangyu Xiang, Xiaobin Xu
Published: January 27, 2026
Authors: 14
Word Count: 10,598
Code: Includes code

AACR-Bench is a multilingual, repository-level benchmark for evaluating automated code review.

Abstract

High-quality evaluation benchmarks are pivotal for deploying Large Language Models (LLMs) in Automated Code Review (ACR). However, existing benchmarks suffer from two critical limitations: first, the lack of multi-language support in repository-level contexts, which restricts the generalizability of evaluation results; second, the reliance on noisy, incomplete ground truth derived from raw Pull Request (PR) comments, which constrains the scope of issue detection. To address these challenges, we introduce AACR-Bench, a comprehensive benchmark that provides full cross-file context across multiple programming languages. Unlike traditional datasets, AACR-Bench employs an "AI-assisted, Expert-verified" annotation pipeline to uncover latent defects often overlooked in original PRs, resulting in a 285% increase in defect coverage. Extensive evaluations of mainstream LLMs on AACR-Bench reveal that previous assessments may have either misjudged or only partially captured model capabilities due to data limitations. Our work establishes a more rigorous standard for ACR evaluation and offers new insights into LLM-based ACR: the granularity/level of context and the choice of retrieval methods significantly impact ACR performance, and this influence varies with the LLM, the programming language, and the LLM usage paradigm, e.g., whether an Agent architecture is employed. The code, data, and other artifacts of our evaluation set are available at https://github.com/alibaba/aacr-bench.
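The "AI-assisted, Expert-verified" annotation pipeline described in the abstract can be sketched roughly as follows. All names here (`Defect`, `ai_propose_defects`, `expert_verify`, `build_ground_truth`) are hypothetical illustrations of the general idea, not the authors' actual implementation: an AI pass proposes candidate defects beyond the original PR comments, a human expert accepts or rejects each candidate, and only accepted candidates join the ground truth.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    file: str
    line: int
    description: str
    source: str  # "pr_comment" or "ai_candidate"
    verified: bool = False

def ai_propose_defects(diff_hunks):
    # Placeholder for an LLM call that scans diff hunks (plus
    # repository-level context) and proposes candidate defects.
    return [
        Defect(h["file"], h["line"], f"possible issue in {h['file']}",
               source="ai_candidate")
        for h in diff_hunks
    ]

def expert_verify(candidates, accept):
    # A human expert accepts or rejects each AI-proposed candidate;
    # only accepted candidates are marked verified and kept.
    kept = []
    for c in candidates:
        if accept(c):
            c.verified = True
            kept.append(c)
    return kept

def build_ground_truth(pr_defects, diff_hunks, accept):
    # Ground truth = defects derived from original PR comments
    # plus expert-verified AI-proposed candidates.
    verified = expert_verify(ai_propose_defects(diff_hunks), accept)
    return pr_defects + verified
```

On a toy input, the pipeline merges one PR-derived defect with whichever AI candidates the (simulated) expert accepts, which is how the benchmark's defect coverage can grow well beyond the raw PR comments.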

Key Takeaways

  • 1

    AACR-Bench provides comprehensive, multilingual repository-level context.

  • 2

    Uses AI-assisted, human expert-verified annotation for higher issue coverage.

  • 3

    Reveals significant performance variations across different programming languages.

Limitations

  • Challenges in constructing a fully comprehensive Ground Truth.

  • High computational requirements may limit accessibility.

Keywords

Large Language Models, Automated Code Review, evaluation benchmarks, cross-file context, AI-assisted annotation, expert-verified, defect coverage, LLM evaluation, context granularity, retrieval methods, Agent architecture
