Controlling Summarization Length Through EOS Token Weighting
- URL: http://arxiv.org/abs/2506.05017v1
- Date: Thu, 05 Jun 2025 13:25:28 GMT
- Title: Controlling Summarization Length Through EOS Token Weighting
- Authors: Zeno Belligoli, Emmanouil Stergiadis, Eran Fainman, Ilya Gusev,
- Abstract summary: Controlling the length of generated text can be crucial in various text-generation tasks, including summarization. We develop a simple approach for controlling the length of automatic text summaries by increasing the importance of correctly predicting the EOS token in the cross-entropy loss computation. We tested it with encoder-decoder and modern GPT-style LLMs, and show that this method can control generation length, often without affecting the quality of the summary.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Controlling the length of generated text can be crucial in various text-generation tasks, including summarization. Existing methods often require complex model alterations, limiting compatibility with pre-trained models. We address these limitations by developing a simple approach for controlling the length of automatic text summaries by increasing the importance of correctly predicting the EOS token in the cross-entropy loss computation. The proposed methodology is agnostic to architecture and decoding algorithms and orthogonal to other inference-time techniques to control the generation length. We tested it with encoder-decoder and modern GPT-style LLMs, and show that this method can control generation length, often without affecting the quality of the summary.
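The core idea described in the abstract, giving the EOS token a larger weight in the cross-entropy loss so the model learns to end generation at the right point, can be sketched as follows. This is a minimal illustration of the technique, not the authors' implementation; the `eos_weight` value and the weighted normalization are assumptions.

```python
import math

def eos_weighted_cross_entropy(logits, targets, eos_id, eos_weight=5.0):
    """Token-level cross-entropy where positions whose target is the EOS
    token contribute `eos_weight` times more to the loss.

    logits:  list of per-token logit lists (one row per sequence position)
    targets: list of target token ids, aligned with `logits`
    """
    total, norm = 0.0, 0.0
    for row, tgt in zip(logits, targets):
        # log-sum-exp for a numerically stable log partition function
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        nll = log_z - row[tgt]  # -log p(target | context)
        w = eos_weight if tgt == eos_id else 1.0  # upweight the EOS position
        total += w * nll
        norm += w
    return total / norm  # weighted mean over positions
```

Because the change lives entirely in the training loss, it is agnostic to architecture and decoding algorithm, as the abstract claims: any model trained with token-level cross-entropy (encoder-decoder or GPT-style) can use it without modification. Raising `eos_weight` makes a mispredicted EOS position costlier, pushing the model toward summaries that terminate where the references do.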
Related papers
- Fast Controlled Generation from Language Models with Adaptive Weighted Rejection Sampling [90.86991492288487]
Evaluating constraints on every token can be prohibitively expensive. LCD can distort the global distribution over strings, sampling tokens based only on local information. We show that our approach is superior to state-of-the-art baselines.
arXiv Detail & Related papers (2025-04-07T18:30:18Z)
- Zero-Shot Strategies for Length-Controllable Summarization [56.15356055672189]
Large language models (LLMs) struggle with precise length control, particularly in zero-shot settings. We conduct a comprehensive study evaluating LLMs' length control capabilities across multiple measures and propose practical methods to improve controllability. Our experiments with LLaMA 3 reveal stark differences in length adherence across measures and highlight inherent biases of the model.
arXiv Detail & Related papers (2024-12-31T02:53:27Z)
- Length Controlled Generation for Black-box LLMs [70.57649832433451]
Large language models (LLMs) have demonstrated impressive instruction following capabilities, but struggle to accurately manage the length of generated text. We propose a novel iterative sampling framework for text length control, integrating the Metropolis-Hastings algorithm with an importance sampling acceleration strategy. Our framework achieves almost 100% success rates of length control on Llama3.1 for tasks such as length-controlled abstractive summarization.
arXiv Detail & Related papers (2024-12-19T09:07:38Z)
- Reinforcement Learning with Token-level Feedback for Controllable Text Generation [16.117006822479407]
We propose a novel reinforcement learning algorithm named TOLE which formulates TOken-LEvel rewards for controllable text generation.
Experimental results show that our algorithm can achieve superior performance on both single-attribute and multi-attribute control tasks.
arXiv Detail & Related papers (2024-03-18T08:18:37Z)
- LiFi: Lightweight Controlled Text Generation with Fine-Grained Control Codes [46.74968005604948]
We present LIFI, which offers a lightweight approach with fine-grained control for controlled text generation.
We evaluate LIFI on two conventional tasks -- sentiment control and topic control -- and one newly proposed task -- stylistic novel writing.
arXiv Detail & Related papers (2024-02-10T11:53:48Z)
- Prompt-Based Length Controlled Generation with Reinforcement Learning [48.49553921757085]
We propose a prompt-based length control method to achieve high-accuracy length controlled generation.
We adopt reinforcement learning with the reward signal given by either trainable or rule-based reward models.
Our method significantly improves the accuracy of prompt-based length control for the summarization task on popular datasets like CNNDM and NYT.
arXiv Detail & Related papers (2023-08-23T09:43:10Z)
- Summarization with Precise Length Control [23.688834410051]
We present a framework to generate summaries with precisely the specified number of tokens or sentences.
We jointly train the model to predict lengths, so it can generate summaries of optimal length.
arXiv Detail & Related papers (2023-05-09T04:45:24Z)
- An Extensible Plug-and-Play Method for Multi-Aspect Controllable Text Generation [70.77243918587321]
Multi-aspect controllable text generation that controls generated text in multiple aspects has attracted increasing attention.
We provide a theoretical lower bound for the interference and empirically find that the interference grows with the number of layers where prefixes are inserted.
We propose using trainable gates to normalize the intervention of prefixes to restrain the growing interference.
arXiv Detail & Related papers (2022-12-19T11:53:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.