arxiv:2510.20780

Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost

Published on Oct 23
· Submitted by Runzhe Zhan on Oct 27

AI-generated summary

Calibrating large reasoning models with synthetic human-like thinking trajectories improves their efficiency and performance in machine translation evaluation.

Abstract

Recent advancements in large reasoning models (LRMs) have introduced an intermediate "thinking" process prior to generating final answers, improving their reasoning capabilities on complex downstream tasks. However, the potential of LRMs as evaluators for machine translation (MT) quality remains underexplored. We provide the first systematic analysis of LRM-as-a-judge in MT evaluation. We identify key challenges, revealing that LRMs require tailored evaluation materials, tend to "overthink" simpler instances, and have issues with scoring mechanisms that lead to overestimation. To address these, we propose to calibrate LRM thinking by training them on synthetic, human-like thinking trajectories. Our experiments on the WMT24 Metrics benchmarks demonstrate that this approach reduces thinking budgets by ~35x while concurrently improving evaluation performance across LRM scales from 7B to 32B (e.g., R1-Distill-Qwen-7B achieves a +8.7 correlation point improvement). These findings highlight the potential of efficiently calibrated LRMs to advance fine-grained automatic MT evaluation.
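
For context on the scoring mechanism involved: the fine-grained scheme the paper targets is MQM (see the authors' note below), which derives a segment score from annotated error spans rather than a single holistic rating. Below is a minimal sketch of the standard WMT-style weighting (minor = 1, major = 5, minor punctuation = 0.1); the weights follow common MQM practice and are an assumption here, not something taken from this paper.

```python
# Illustrative MQM segment scoring under the common WMT weighting.
# Weights are an assumption from standard MQM practice, not from this paper.
MQM_WEIGHTS = {"minor": 1.0, "major": 5.0, "minor-punctuation": 0.1}

def mqm_score(errors):
    """errors: list of (severity, category) pairs annotated on one segment."""
    penalty = 0.0
    for severity, category in errors:
        if severity == "minor" and category == "punctuation":
            penalty += MQM_WEIGHTS["minor-punctuation"]
        else:
            penalty += MQM_WEIGHTS[severity]
    return -penalty  # 0 is perfect; more negative means worse

# One major accuracy error plus one minor fluency error -> -6.0
print(mqm_score([("major", "accuracy"), ("minor", "fluency")]))
```

An LRM judge that assigns scores systematically above this rubric would exhibit the overestimation bias the abstract describes.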

Community


Evaluating machine translation (MT) quality is a complex task that extends beyond simple string matching. Large Reasoning Models (LRMs) are capable of modeling intricate reasoning processes, yet their role in MT evaluation remains insufficiently understood. In this work, we present a systematic investigation into the use of LRMs as evaluators for MT quality, specifically exploring their ability to replicate the Multidimensional Quality Metrics (MQM) assessment process. Our analysis across various LRMs reveals that evaluation materials must be carefully tailored, as these models tend to overanalyze simple cases and exhibit overestimation biases. To address these challenges, we introduce a simple yet effective method for calibrating LRM reasoning by training them on synthetic, human-like MQM evaluation trajectories. This work underscores the potential of efficiently calibrated LRMs to advance fine-grained, automatic MT evaluation.
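
As a rough illustration of the calibration recipe described above, gold MQM annotations could be rendered into a compact, human-like reasoning trace and used as the target "thinking" segment during fine-tuning. Everything below (the template wording, field names, and the build_trajectory helper) is a hypothetical sketch, not the paper's actual data pipeline.

```python
# Hypothetical sketch: turning gold MQM annotations into a short,
# human-like thinking trajectory for calibration fine-tuning.
# Template wording and field names are assumptions, not the paper's format.

def build_trajectory(src, hyp, errors, score):
    steps = [f"Source: {src}", f"Translation: {hyp}"]
    if not errors:
        steps.append("I find no errors; the translation is acceptable.")
    for span, severity, category in errors:
        steps.append(f'"{span}" is a {severity} {category} error.')
    steps.append(f"Summing the penalties gives an MQM score of {score}.")
    return "\n".join(steps)

example = {
    "prompt": "Evaluate the translation using MQM.",
    "thinking": build_trajectory(
        "Er hat den Zug verpasst.",
        "He missed the bus.",
        [("bus", "major", "accuracy/mistranslation")],
        -5.0,
    ),
    "answer": "-5.0",
}
```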

Model Collection: https://huggingface.co/collections/rzzhan/thinmqm
GitHub: https://github.com/NLP2CT/ThinMQM
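
A minimal usage sketch for a checkpoint from the collection, assuming a standard causal-LM interface; the model ID and prompt format below are placeholders, so check the collection and repository for the real ones.

```python
# Hypothetical usage sketch; the model ID and prompt format are assumptions,
# so consult the ThinMQM collection/README for the actual ones.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rzzhan/ThinMQM-7B"  # placeholder; see the collection for real IDs
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Evaluate the following translation with MQM.\n"
    "Source: Er hat den Zug verpasst.\n"
    "Translation: He missed the bus.\n"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```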

