Document-Level Text Generation with Minimum Bayes Risk Decoding using Optimal Transport
- URL: http://arxiv.org/abs/2505.23078v1
- Date: Thu, 29 May 2025 04:34:04 GMT
- Title: Document-Level Text Generation with Minimum Bayes Risk Decoding using Optimal Transport
- Authors: Yuu Jinnai
- Abstract summary: We investigate the adaptation of Minimum Bayes Risk (MBR) decoding for document-level text generation tasks. MBR decoding makes use of a utility function to estimate the output with the highest expected utility from a set of candidate outputs. MBR-OT, a variant of MBR decoding using Wasserstein distance, computes the utility of a document using a sentence-level utility function.
- Score: 3.48097307252416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Document-level text generation tasks are known to be more difficult than sentence-level text generation tasks, as they require the understanding of longer context to generate high-quality texts. In this paper, we investigate the adaptation of Minimum Bayes Risk (MBR) decoding for document-level text generation tasks. MBR decoding makes use of a utility function to estimate the output with the highest expected utility from a set of candidate outputs. Although MBR decoding is shown to be effective in a wide range of sentence-level text generation tasks, its performance on document-level text generation tasks is limited, as many utility functions are designed for evaluating the utility of sentences. To this end, we propose MBR-OT, a variant of MBR decoding that uses Wasserstein distance to compute the utility of a document from a sentence-level utility function. Experimental results show that MBR-OT outperforms standard MBR in document-level machine translation, text simplification, and dense image captioning tasks. Our code is available at https://github.com/jinnaiyuu/mbr-optimal-transport
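To make the mechanism concrete, here is a minimal sketch of the idea, not the authors' implementation (see the repository above for that): a candidate document is scored against a pseudo-reference by solving an optimal transport problem between their sentence sets, using one minus a sentence-level utility as the transport cost, and this document utility is plugged into the standard MBR selection rule. The uniform sentence weights and the exact OT solver from the POT library are assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport ("pip install pot"), a stand-in exact OT solver


def ot_document_utility(hyp_sents, ref_sents, sent_utility):
    # Transport cost between sentences: cheap to align sentence pairs that a
    # sentence-level utility (e.g. a sentence metric) scores highly.
    cost = np.array([[1.0 - sent_utility(h, r) for r in ref_sents]
                     for h in hyp_sents])
    # Uniform mass over sentences (an assumption; length-based weights are another option).
    a = np.full(len(hyp_sents), 1.0 / len(hyp_sents))
    b = np.full(len(ref_sents), 1.0 / len(ref_sents))
    # Exact Wasserstein cost, converted back to a higher-is-better utility.
    return 1.0 - ot.emd2(a, b, cost)


def mbr_select(candidates, doc_utility):
    # Standard MBR: each candidate (a document, pre-split into sentences) is
    # scored by its mean utility against the others, used as pseudo-references.
    scores = [np.mean([doc_utility(c, r) for j, r in enumerate(candidates) if j != i])
              for i, c in enumerate(candidates)]
    return candidates[int(np.argmax(scores))]
```

The appeal of the OT formulation is that a candidate document is rewarded for covering the reference's sentences regardless of order or segmentation, which a single document-level string comparison with a sentence-designed metric would not capture.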
Related papers
- GigaCheck: Detecting LLM-generated Content [72.27323884094953]
In this work, we investigate the task of generated-text detection by proposing GigaCheck.
Our research explores two approaches: (i) distinguishing human-written texts from LLM-generated ones, and (ii) detecting LLM-generated intervals in Human-Machine collaborative texts.
Specifically, we use a fine-tuned general-purpose LLM in conjunction with a DETR-like detection model, adapted from computer vision, to localize AI-generated intervals within text.
arXiv Detail & Related papers (2024-10-31T08:30:55Z)
- mbrs: A Library for Minimum Bayes Risk Decoding [27.207891251898904]
mbrs is a library for Minimum Bayes Risk (MBR) decoding.
MBR is a decision rule for text generation tasks that outperforms conventional maximum a posteriori (MAP) decoding.
We published our mbrs as an MIT-licensed open-source project, and the code is available on GitHub.
arXiv Detail & Related papers (2024-08-08T02:28:32Z)
- Unveiling the Power of Source: Source-based Minimum Bayes Risk Decoding for Neural Machine Translation [30.323103270892734]
Minimum Bayes Risk (MBR) decoding offers an alternative by seeking hypotheses with the highest expected utility. Our findings suggest that sMBR, a source-based variant of MBR, is a promising approach for NMT decoding.
arXiv Detail & Related papers (2024-06-17T15:13:52Z)
- Efficient Minimum Bayes Risk Decoding using Low-Rank Matrix Completion Algorithms [19.543681023903456]
We formulate Minimum Bayes Risk (MBR) decoding as a matrix completion problem.
We exploit the low-rank structure of the utility matrix by computing only a random subset of the scores and efficiently recovering the missing entries.
Our experimental results on machine translation tasks demonstrate that the proposed method requires only 1/16 of the utility metric computations.
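As a rough, hedged sketch of this recipe (not the paper's specific completion algorithm; the sampling fraction, rank, and hard-impute loop below are all assumptions): compute a random 1/16 of the candidate-by-pseudo-reference utility matrix, complete the rest with a low-rank approximation, and pick the candidate with the best completed row mean.

```python
import numpy as np


def complete_and_rank(utility_fn, candidates, frac=1 / 16, rank=2, iters=50, seed=0):
    n = len(candidates)
    rng = np.random.default_rng(seed)
    mask = rng.random((n, n)) < frac            # which pairwise scores we actually compute
    observed = np.zeros((n, n))
    for i, j in zip(*np.nonzero(mask)):
        observed[i, j] = utility_fn(candidates[i], candidates[j])
    # Hard-impute: alternate a truncated-SVD projection with re-fixing observed entries.
    X = np.where(mask, observed, observed[mask].mean())
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = np.where(mask, observed, (U[:, :rank] * s[:rank]) @ Vt[:rank])
    # MBR winner: best row mean of the completed utility matrix.
    return candidates[int(np.argmax(X.mean(axis=1)))]
```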
arXiv Detail & Related papers (2024-06-05T00:54:03Z)
- Linear-time Minimum Bayes Risk Decoding with Reference Aggregation [52.1701152610258]
Minimum Bayes Risk (MBR) decoding is a text generation technique that has been shown to improve the quality of machine translations.
It requires the pairwise calculation of a utility metric, which has quadratic complexity.
We propose to approximate pairwise metric scores with scores calculated against aggregated reference representations.
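A minimal sketch of the aggregation idea under a simplifying assumption: with an embedding-based utility (cosine similarity here, a stand-in for the actual MT metrics studied in the paper), scoring each candidate against the mean candidate embedding approximates the average pairwise score in O(n) metric calls instead of O(n^2).

```python
import numpy as np


def mbr_reference_aggregation(cand_embs):
    # cand_embs: (n, d) array, one embedding per candidate translation.
    ref = cand_embs.mean(axis=0)                          # aggregated pseudo-reference
    ref /= np.linalg.norm(ref)
    cand = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    return int(np.argmax(cand @ ref))                     # one score per candidate, not n
```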
arXiv Detail & Related papers (2024-02-06T18:59:30Z)
- Generating Diverse and High-Quality Texts by Minimum Bayes Risk Decoding [4.209844101827474]
We develop diversity-promoting decoding algorithms by adding diversity objectives to Minimum Bayes Risk (MBR) decoding.
We evaluate the resulting algorithms, DMBR and KMBR, on a variety of directed text generation tasks using encoder-decoder models and a prompted large language model.
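One common way to trade utility against diversity, sketched below with hedged specifics (the greedy form and the penalty weight are assumptions, not necessarily the DMBR/KMBR objectives): greedily build a set of k outputs, each step adding the candidate with the best expected utility minus a penalty for similarity to outputs already selected.

```python
import numpy as np


def diverse_mbr(candidates, utility, k=4, lam=0.5):
    n = len(candidates)
    U = np.array([[utility(candidates[i], candidates[j]) for j in range(n)]
                  for i in range(n)])
    exp_util = U.mean(axis=1)                  # standard MBR objective per candidate
    selected = []
    for _ in range(k):
        penalty = U[:, selected].max(axis=1) if selected else np.zeros(n)
        scores = exp_util - lam * penalty      # utility minus similarity to the chosen set
        scores[selected] = -np.inf             # never pick the same candidate twice
        selected.append(int(np.argmax(scores)))
    return [candidates[i] for i in selected]
```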
arXiv Detail & Related papers (2024-01-10T10:23:41Z)
- Hyperparameter-Free Approach for Faster Minimum Bayes Risk Decoding [5.639904484784127]
Minimum Bayes-Risk (MBR) decoding is a powerful alternative to beam search decoding for a wide range of text generation tasks.
However, MBR requires substantial inference time to compute its objective.
Confidence-based pruning (CBP) has recently been proposed to reduce the inference time in machine translation tasks.
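A simplified sketch of the pruning idea (the actual CBP procedure differs in its details; the batch size, bootstrap count, and win-rate threshold below are assumptions): score candidates on a growing sample of pseudo-references and repeatedly drop candidates that are unlikely, under bootstrap resampling, to end up as the winner.

```python
import numpy as np


def cbp_mbr(candidates, utility, batch=8, n_boot=200, alpha=0.01, seed=0):
    rng = np.random.default_rng(seed)
    alive = list(range(len(candidates)))
    scores = [[] for _ in candidates]          # utilities vs. pseudo-references seen so far
    refs = list(candidates)
    rng.shuffle(refs)
    for start in range(0, len(refs), batch):
        for i in alive:
            scores[i] += [utility(candidates[i], r) for r in refs[start:start + batch]]
        # Bootstrap the reference sample to estimate each survivor's win rate.
        mat = np.array([scores[i] for i in alive])            # (n_alive, n_seen)
        idx = rng.integers(0, mat.shape[1], size=(n_boot, mat.shape[1]))
        winners = mat[:, idx].mean(axis=2).argmax(axis=0)     # winner per bootstrap draw
        win_rate = np.bincount(winners, minlength=len(alive)) / n_boot
        survivors = [i for i, w in zip(alive, win_rate) if w > alpha]
        alive = survivors or [alive[int(win_rate.argmax())]]  # keep at least the leader
        if len(alive) == 1:
            break
    return candidates[max(alive, key=lambda i: np.mean(scores[i]))]
```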
arXiv Detail & Related papers (2024-01-05T11:02:08Z)
- It's MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk [57.641436861482696]
Minimum Bayes Risk (MBR) decoding is a method for choosing the output of a machine learning system based not on the output with the highest probability, but on the output with the lowest risk (expected error) among multiple candidates.
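In symbols, with a hypothesis set H sampled from the model and an error function err, the contrast with MAP decoding is:

  y_MAP = argmax_{y ∈ H} p(y | x)
  y_MBR = argmin_{y ∈ H} E_{y' ~ p(· | x)}[err(y, y')] ≈ argmin_{y ∈ H} (1/|H|) Σ_{y' ∈ H} err(y, y')

where the expectation over the model distribution is estimated in practice by reusing the candidate set H as pseudo-references.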
arXiv Detail & Related papers (2023-10-02T17:47:10Z)
- Towards Codable Watermarking for Injecting Multi-bits Information to LLMs [86.86436777626959]
Large language models (LLMs) generate texts with increasing fluency and realism.
Existing watermarking methods are encoding-inefficient and cannot flexibly meet diverse information-encoding needs.
We propose Codable Text Watermarking for LLMs (CTWL) that allows text watermarks to carry multi-bit customizable information.
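For intuition, here is a toy multi-bit watermark in the spirit of vocabulary-partition schemes; it is not the CTWL method, and the keyed hash split, tokens-per-bit budget, and majority-vote decoder are all illustrative assumptions. Each payload bit selects a keyed "green" half of the vocabulary, tokens are drawn from that half, and a detector who knows the key reads the bits back.

```python
import hashlib
import random


def green_set(vocab, key, bit_index, bit):
    # Deterministically split the vocabulary in half with a keyed hash;
    # the payload bit picks which half counts as "green".
    def h(tok):
        return hashlib.sha256(f"{key}:{bit_index}:{tok}".encode()).digest()[0] & 1
    return {tok for tok in vocab if h(tok) == bit}


def embed(bits, vocab, key, tokens_per_bit=16, seed=0):
    # Toy "generation": sample every token for bit i from that bit's green half.
    rng = random.Random(seed)
    return [rng.choice(sorted(green_set(vocab, key, i, b)))
            for i, b in enumerate(bits) for _ in range(tokens_per_bit)]


def extract(tokens, n_bits, vocab, key, tokens_per_bit=16):
    bits = []
    for i in range(n_bits):
        chunk = tokens[i * tokens_per_bit:(i + 1) * tokens_per_bit]
        ones = green_set(vocab, key, i, 1)
        bits.append(int(sum(tok in ones for tok in chunk) > tokens_per_bit // 2))
    return bits                                 # majority vote per payload bit
```

Real schemes bias the language model's logits toward the green tokens rather than sampling from them outright, trading detectability against text quality.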
arXiv Detail & Related papers (2023-07-29T14:11:15Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
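A skeletal sketch of that shared-backbone, multi-branch layout (the layer shapes, query count, and heads below are assumptions, not the TextFormer architecture itself): one image encoder feeds a query-based decoder, and the three task heads co-train on shared features.

```python
import torch
import torch.nn as nn


class TextSpotterSketch(nn.Module):
    """Shared image encoder feeding classification, segmentation, and
    recognition branches, so the three tasks share one feature map."""

    def __init__(self, vocab_size=100, d=256, n_queries=25):
        super().__init__()
        self.encoder = nn.Sequential(                     # stand-in image encoder
            nn.Conv2d(3, d, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(d, d, 3, stride=2, padding=1), nn.ReLU())
        self.queries = nn.Parameter(torch.randn(n_queries, d))
        self.decoder = nn.TransformerDecoder(             # query-based text decoder
            nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True),
            num_layers=2)
        self.cls_head = nn.Linear(d, 2)                   # text / not-text per query
        self.seg_head = nn.Conv2d(d, 1, 1)                # per-pixel text mask
        self.rec_head = nn.Linear(d, vocab_size)          # character logits per query

    def forward(self, images):
        feats = self.encoder(images)                      # (B, d, H', W')
        tokens = feats.flatten(2).transpose(1, 2)         # (B, H'*W', d)
        q = self.queries.expand(images.size(0), -1, -1)   # (B, n_queries, d)
        dec = self.decoder(q, tokens)
        return self.cls_head(dec), self.seg_head(feats), self.rec_head(dec)
```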
arXiv Detail & Related papers (2023-06-06T03:37:41Z) - LLMDet: A Third Party Large Language Models Generated Text Detection
Tool [119.0952092533317]
Text generated by large language models (LLMs) is remarkably close to high-quality human-authored text.
Existing detection tools can only differentiate between machine-generated and human-authored text.
We propose LLMDet, a model-specific, secure, efficient, and extendable detection tool.
arXiv Detail & Related papers (2023-05-24T10:45:16Z)
- Composable Text Controls in Latent Space with ODEs [97.12426987887021]
This paper proposes a new efficient approach for composable text operations in the compact latent space of text.
By connecting pretrained LMs to the latent space through efficient adaptation, we decode the sampled vectors into desired text sequences.
Experiments show that composing these operators within our approach generates and edits high-quality text.
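As a toy illustration of composing text operations in a latent space (the paper's approach uses ODE-based sampling and pretrained-LM adapters; the word-vector space, attribute operator, and nearest-neighbour decoder below are all simplified stand-ins): encode text into a vector, compose operations as vector arithmetic, and decode the result.

```python
import numpy as np

# Toy latent space: mean of random word vectors; "operators" are directions in it.
rng = np.random.default_rng(0)
vocab = ["the", "movie", "was", "great", "terrible", "boring", "fun"]
emb = {w: rng.normal(size=8) for w in vocab}


def encode(text):
    return np.mean([emb[w] for w in text.split()], axis=0)


def decode(z, length=4):
    # Nearest-neighbour decoding: emit the words whose vectors best match z.
    sims = {w: float(v @ z) for w, v in emb.items()}
    return " ".join(sorted(vocab, key=lambda w: -sims[w])[:length])


# Composable operator: shift the latent code toward "great", away from "terrible".
to_positive = emb["great"] - emb["terrible"]
z = encode("the movie was terrible") + 1.5 * to_positive
print(decode(z))
```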
arXiv Detail & Related papers (2022-08-01T06:51:45Z)