Recovered in Translation: Efficient Pipeline for Automated Translation of Benchmarks and Datasets

Hanna Yukhymenko, Anton Alexandrov, Martin Vechev
Published: February 25, 2026
Authors: 3
Word Count: 11,032

Automated framework ensures high-quality multilingual benchmark translations using advanced test-time scaling methods.

Abstract

The reliability of multilingual Large Language Model (LLM) evaluation is currently compromised by the inconsistent quality of translated benchmarks. Existing resources often suffer from semantic drift and context loss, which can lead to misleading performance metrics. In this work, we present a fully automated framework designed to address these challenges by enabling scalable, high-quality translation of datasets and benchmarks. We demonstrate that adapting test-time compute scaling strategies, specifically Universal Self-Improvement (USI) and our proposed multi-round ranking method, T-RANK, allows for significantly higher quality outputs compared to traditional pipelines. Our framework ensures that benchmarks preserve their original task structure and linguistic nuances during localization. We apply this approach to translate popular benchmarks and datasets into eight Eastern and Southern European languages (Ukrainian, Bulgarian, Slovak, Romanian, Lithuanian, Estonian, Turkish, Greek). Evaluations using both reference-based metrics and LLM-as-a-judge show that our translations surpass existing resources, resulting in more accurate downstream model assessment. We release both the framework and the improved benchmarks to facilitate robust and reproducible multilingual AI development.
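The multi-round ranking idea behind T-RANK can be pictured as a knockout tournament over candidate translations, where a judge model repeatedly compares pairs and keeps the winners. The sketch below is a hypothetical illustration only: the paper's actual T-RANK procedure, prompts, and judge are not specified here, and `score_pair` is a toy stand-in (it simply prefers the longer string) for an LLM-as-a-judge call.

```python
import random

def score_pair(candidate_a: str, candidate_b: str) -> str:
    """Toy judge: prefers the longer candidate (stand-in for an LLM judge)."""
    return candidate_a if len(candidate_a) >= len(candidate_b) else candidate_b

def multi_round_rank(candidates: list[str], rng: random.Random) -> str:
    """Run knockout rounds over the candidate pool until one survives."""
    pool = list(candidates)
    while len(pool) > 1:
        rng.shuffle(pool)
        next_round = []
        # Pair candidates off; an odd one out advances automatically.
        for i in range(0, len(pool) - 1, 2):
            next_round.append(score_pair(pool[i], pool[i + 1]))
        if len(pool) % 2 == 1:
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]

candidates = ["short", "a longer translation", "mid length"]
best = multi_round_rank(candidates, random.Random(0))
print(best)  # the toy judge always keeps the longest candidate
```

In a real pipeline the candidates would be alternative translations of one benchmark item, and the pairwise comparison would be a judged quality decision rather than a length check; the tournament structure keeps the number of judge calls linear in the candidate count.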

Key Takeaways

  1. Existing multilingual benchmarks suffer from translation quality issues that compromise LLM evaluation reliability across languages.

  2. Test-time compute scaling methods like USI and T-RANK significantly improve machine translation quality for benchmarks.

  3. Eastern European languages with complex grammar require context-aware translation to prevent grammatical leakage that reveals answers.

Limitations

  • Framework focuses only on eight Eastern and Southern European languages, limiting generalizability to other language families.

  • Evaluation relies on LLM-as-a-judge scoring which may introduce biases inherent to the judging model itself.

Keywords

Large Language Model, multilingual evaluation, benchmark translation, semantic drift, context loss, test-time compute scaling, Universal Self-Improvement, T-RANK, linguistic nuances, reference-based metrics, LLM-as-a-judge
