Extending Automatic Machine Translation Evaluation to Book-Length Documents
- URL: http://arxiv.org/abs/2509.17249v1
- Date: Sun, 21 Sep 2025 21:46:58 GMT
- Title: Extending Automatic Machine Translation Evaluation to Book-Length Documents
- Authors: Kuang-Da Wang, Shuoyang Ding, Chao-Han Huck Yang, Ping-Chun Hsieh, Wen-Chih Peng, Vitaly Lavrukhin, Boris Ginsburg
- Abstract summary: SEGALE is an evaluation scheme that extends existing automatic metrics to long-document translation. Our approach enables previously unattainable document-level evaluation. Experiments show our scheme significantly outperforms existing long-form document evaluation schemes.
- Score: 69.84659107448768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite Large Language Models (LLMs) demonstrating superior translation performance and long-context capabilities, evaluation methodologies remain constrained to sentence-level assessment due to dataset limitations, token number restrictions in metrics, and rigid sentence boundary requirements. We introduce SEGALE, an evaluation scheme that extends existing automatic metrics to long-document translation by treating documents as continuous text and applying sentence segmentation and alignment methods. Our approach enables previously unattainable document-level evaluation, handling translations of arbitrary length generated with document-level prompts while accounting for under-/over-translations and varied sentence boundaries. Experiments show our scheme significantly outperforms existing long-form document evaluation schemes, while being comparable to evaluations performed with ground-truth sentence alignments. Additionally, we apply our scheme to book-length texts and newly demonstrate that many open-weight LLMs fail to effectively translate documents at their reported maximum context lengths.
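The segment-then-align idea behind the abstract can be illustrated with a minimal sketch. This is not the SEGALE implementation: the naive regex segmenter, the greedy `difflib`-based aligner, the `0.3` similarity threshold, and the toy token-overlap F1 metric are all stand-ins for the paper's actual segmentation, alignment, and sentence-level metric (e.g., a learned metric such as COMET). It only shows the overall shape: treat the hypothesis and reference documents as continuous text, segment both, align sentence pairs, score unmatched references as under-translations, and average.

```python
import re
from difflib import SequenceMatcher

def split_sentences(text):
    """Naive segmentation on terminal punctuation; a stand-in for a real segmenter."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]

def align(hyp_sents, ref_sents):
    """Greedy 1-1 alignment by surface similarity (illustrative only).

    Reference sentences with no sufficiently similar hypothesis sentence are
    paired with the empty string, so under-translation is penalized.
    """
    pairs, used = [], set()
    for ref in ref_sents:
        best, best_score, best_i = "", 0.0, None
        for i, hyp in enumerate(hyp_sents):
            if i in used:
                continue
            score = SequenceMatcher(None, hyp.lower(), ref.lower()).ratio()
            if score > best_score:
                best, best_score, best_i = hyp, score, i
        if best_i is not None and best_score > 0.3:  # arbitrary illustrative threshold
            used.add(best_i)
            pairs.append((best, ref))
        else:
            pairs.append(("", ref))  # under-translation: ref left uncovered
    return pairs

def token_f1(hyp, ref):
    """Toy token-overlap F1 standing in for a real sentence-level metric."""
    h, r = hyp.lower().split(), ref.lower().split()
    if not h or not r:
        return 0.0
    overlap = len(set(h) & set(r))
    p, q = overlap / len(h), overlap / len(r)
    return 2 * p * q / (p + q) if p + q else 0.0

def document_score(hyp_doc, ref_doc):
    """Average sentence-level score over aligned pairs of a whole document."""
    pairs = align(split_sentences(hyp_doc), split_sentences(ref_doc))
    return sum(token_f1(h, r) for h, r in pairs) / len(pairs)
```

Under this sketch, a perfect translation scores 1.0, while a translation that drops the second half of a two-sentence document scores 0.5, since the uncovered reference sentence is scored against an empty hypothesis.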
Related papers
- Same evaluation, more tokens: On the effect of input length for machine translation evaluation using Large Language Models [6.525298236457623]
Large language models (LLMs) can serve as reliable and interpretable sentence-level translation evaluators via MQM error span annotations. We show that evaluation should be invariant to text length, producing consistent error spans regardless of input granularity. We evaluate several strategies, including granularity-aligned prompting, Focus Sentence Prompting (FSP) and a fine-tuning approach to better align LLMs with the evaluation task.
arXiv Detail & Related papers (2025-05-03T09:30:26Z) - Multilingual Contextualization of Large Language Models for Document-Level Machine Translation [28.08957305340726]
Large language models (LLMs) have demonstrated strong performance in sentence-level machine translation. We propose a method to improve LLM-based long-document translation through targeted fine-tuning on high-quality document-level data. Our approach supports multiple translation paradigms, including direct document-to-document and chunk-level translation.
arXiv Detail & Related papers (2025-04-16T14:52:22Z) - DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory [96.35468670508476]
We introduce DelTA, a Document-levEL Translation Agent for large language models (LLMs). DelTA features a multi-level memory structure that stores information across various granularities and spans. Experimental results indicate that DelTA significantly outperforms strong baselines in terms of translation consistency and quality.
arXiv Detail & Related papers (2024-10-10T17:30:09Z) - Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA [71.04146366608904]
Long-context modeling capabilities have garnered widespread attention, leading to the emergence of Large Language Models (LLMs) with ultra-context windows.
We propose a novel long-context benchmark, Loong, aligning with realistic scenarios through extended multi-document question answering (QA).
Loong introduces four types of tasks with a range of context lengths: Spotlight Locating, Comparison, Clustering, and Chain of Reasoning.
arXiv Detail & Related papers (2024-06-25T09:42:56Z) - PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models [72.57329554067195]
ProxyQA is an innovative framework dedicated to assessing long-text generation.
It comprises in-depth human-curated meta-questions spanning various domains, each accompanied by specific proxy-questions with pre-annotated answers.
It assesses the generated content's quality through the evaluator's accuracy in addressing the proxy-questions.
arXiv Detail & Related papers (2024-01-26T18:12:25Z) - Enhancing Document-level Translation of Large Language Model via Translation Mixed-instructions [24.025242477280983]
Existing large language models (LLMs) for machine translation are typically fine-tuned on sentence-level translation instructions.
This sentence-level fine-tuning leads to a coverage issue in document-level translation, where subsequent sentences in the document remain untranslated.
We propose an approach that combines sentence-level and document-level translation instructions of varying lengths to fine-tune LLMs.
arXiv Detail & Related papers (2024-01-16T03:28:26Z) - In-context Pretraining: Language Modeling Beyond Document Boundaries [137.53145699439898]
In-Context Pretraining is a new approach where language models are pretrained on a sequence of related documents.
We introduce approximate algorithms for finding related documents with efficient nearest neighbor search.
We see notable improvements in tasks that require more complex contextual reasoning.
arXiv Detail & Related papers (2023-10-16T17:57:12Z) - LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive Summarisation [28.438103177230477]
We evaluate the efficacy of automatic metrics for assessing the factual consistency of long document text summarisation.
We propose a new evaluation framework, LongDocFACTScore, which is suitable for evaluating long document summarisation data sets.
arXiv Detail & Related papers (2023-09-21T19:54:54Z) - A Comparison of Approaches to Document-level Machine Translation [34.2276281264886]
This paper presents a systematic comparison of selected approaches to document-level machine translation, evaluated on document-level phenomena evaluation suites.
We find that a simple method based purely on back-translating monolingual document-level data performs as well as much more elaborate alternatives.
arXiv Detail & Related papers (2021-01-26T19:21:09Z) - Re-evaluating Evaluation in Text Summarization [77.4601291738445]
We re-evaluate the evaluation method for text summarization using top-scoring system outputs.
We find that conclusions about evaluation metrics on older datasets do not necessarily hold on modern datasets and systems.
arXiv Detail & Related papers (2020-10-14T13:58:53Z) - Document-level Neural Machine Translation with Document Embeddings [82.4684444847092]
This work focuses on exploiting detailed document-level context in terms of multiple forms of document embeddings.
The proposed document-aware NMT is implemented to enhance the Transformer baseline by introducing both global and local document-level clues on the source end.
arXiv Detail & Related papers (2020-09-16T19:43:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.