Unlocking Anticipatory Text Generation: A Constrained Approach for Large Language Models Decoding
- URL: http://arxiv.org/abs/2312.06149v4
- Date: Fri, 04 Oct 2024 16:48:51 GMT
- Title: Unlocking Anticipatory Text Generation: A Constrained Approach for Large Language Models Decoding
- Authors: Lifu Tu, Semih Yavuz, Jin Qu, Jiacheng Xu, Rui Meng, Caiming Xiong, Yingbo Zhou
- Abstract summary: Large Language Models (LLMs) have demonstrated a powerful ability for text generation.
However, undesired behaviors such as toxicity or hallucinations can manifest.
We propose formalizing text generation as a future-constrained generation problem.
- Score: 75.06872859716049
- Abstract: Large Language Models (LLMs) have demonstrated a powerful ability for text generation. However, achieving optimal results with a given prompt or instruction can be challenging, especially for billion-sized models. Additionally, undesired behaviors such as toxicity or hallucinations can manifest. While much larger models (e.g., ChatGPT) may demonstrate strength in mitigating these issues, there is still no guarantee of complete prevention. In this work, we propose formalizing text generation as a future-constrained generation problem to minimize undesirable behaviors and enforce faithfulness to instructions. The estimation of future constraint satisfaction, accomplished using LLMs, guides the text generation process. Our extensive experiments demonstrate the effectiveness of the proposed approach across three distinct text generation tasks: keyword-constrained generation (Lin et al., 2020), toxicity reduction (Gehman et al., 2020), and factual correctness in question-answering (Gao et al., 2023).
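To make the idea concrete, here is a minimal, hedged sketch of future-constraint-guided decoding: candidate continuations are reranked by combining their LM log-probability with an LLM-based estimate of future constraint satisfaction. This is in the spirit of the paper, not its exact implementation; the model choice (gpt2), the verbalized-constraint scoring, and the weight `lam` are illustrative assumptions.

```python
# Minimal sketch of future-constraint-guided decoding, in the spirit of the
# paper but NOT its exact implementation. The model choice (gpt2), the
# verbalized-constraint scoring, and the weight `lam` are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_logprob(text: str) -> float:
    """Mean token log-probability of `text` under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy
    return -loss.item()

def future_constraint_score(prefix: str, constraint: str) -> float:
    """Estimate future constraint satisfaction with the LLM itself:
    here, the likelihood of a verbalized constraint statement given
    the current prefix (an illustrative choice)."""
    return avg_logprob(prefix + " " + constraint)

def rerank(prompt: str, candidates: list[str], constraint: str, lam: float = 1.0) -> str:
    """Pick the continuation maximizing fluency plus the estimated
    future constraint satisfaction: log p(y|x) + lam * c(y)."""
    return max(
        candidates,
        key=lambda y: avg_logprob(prompt + y)
        + lam * future_constraint_score(prompt + y, constraint),
    )
```

For example, with the constraint "The answer mentions a city.", continuations whose likely futures make that statement probable are preferred over continuations that are merely fluent.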
Related papers
- Deliberate then Generate: Enhanced Prompting Framework for Text Generation [70.10319005141888]
The Deliberate then Generate (DTG) prompting framework consists of error-detection instructions and candidates that may contain errors.
We conduct extensive experiments on 20+ datasets across 7 text generation tasks, including summarization, translation, dialogue, and more.
We show that DTG consistently outperforms existing prompting methods and achieves state-of-the-art performance on multiple text generation tasks.
arXiv Detail & Related papers (2023-05-31T13:23:04Z)
- Tractable Control for Autoregressive Language Generation [82.79160918147852]
We propose GeLaTo, which uses tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models.
We show that GeLaTo achieves state-of-the-art performance on challenging benchmarks for constrained text generation.
Our work opens up new avenues for controlling large language models and also motivates the development of more expressive TPMs.
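A hedged sketch of the decoding rule this line of work relies on: the LM's next-token distribution is reweighted by the tractable model's probability that the lexical constraint can still be satisfied, i.e. p_LM(token | prefix) * p_TPM(constraint | prefix, token). The `tpm` interface below is a hypothetical stand-in, not the paper's code; GeLaTo instantiates the TPM with hidden Markov models distilled from the LM.

```python
# Hedged sketch of the TPM-guided decoding rule, not the paper's code.
# `tpm` is a hypothetical stand-in for the tractable model.
import math

def constrained_next_token_dist(lm_logprobs, prefix_tokens, tpm, constraint):
    """Reweight p_LM(token | prefix) by p_TPM(constraint | prefix, token),
    the tractable model's probability that the lexical constraint can
    still be satisfied, then renormalize."""
    scores = {
        tok: lp + tpm.constraint_logprob(prefix_tokens + [tok], constraint)
        for tok, lp in lm_logprobs.items()
    }
    z = math.log(sum(math.exp(s) for s in scores.values()))  # normalizer
    return {tok: s - z for tok, s in scores.items()}
```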
arXiv Detail & Related papers (2023-04-15T00:19:44Z)
- Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation [65.48908724440047]
We propose a method called reverse generation to construct adversarial contexts conditioned on a given response.
We test three popular pretrained dialogue models (Blender, DialoGPT, and Plato2) and find that BAD+, the adversarial dataset built with reverse generation, can largely expose their safety problems.
arXiv Detail & Related papers (2022-12-04T12:23:41Z)
- Why is constrained neural language generation particularly challenging? [13.62873478165553]
We present an extensive survey on the emerging topic of constrained neural language generation.
We distinguish between conditions and constraints, present constrained text generation tasks, and review existing methods and evaluation metrics for constrained text generation.
Our aim is to highlight recent progress and trends in this emerging field, pointing to the most promising directions and the open limitations for advancing the state of the art in constrained neural language generation research.
arXiv Detail & Related papers (2022-06-11T02:07:33Z)
- NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics [73.96837492216204]
We propose NeuroLogic A*esque, a decoding algorithm that incorporates estimates of future cost.
We develop lookahead heuristics that are efficient for large-scale language models.
Our approach outperforms competitive baselines on five generation tasks, and achieves new state-of-the-art performance on table-to-text generation, constrained machine translation, and keyword-constrained generation.
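A hedged sketch of the lookahead idea, under assumed helpers (`next_token_logits` and `logprob` are hypothetical methods, and the rollout length and bonus weight are illustrative): each candidate token is scored by its LM log-probability plus a bonus when a short greedy rollout from that token covers the unmet keyword constraints.

```python
# Hedged sketch of lookahead-guided decoding in the spirit of NeuroLogic
# A*esque, not the authors' implementation. `next_token_logits` and
# `logprob` are hypothetical helper methods.
def lookahead_score(model, tokenizer, prefix_ids, cand_id, keywords,
                    rollout_len=8, bonus=2.0):
    ids = list(prefix_ids) + [cand_id]
    for _ in range(rollout_len):
        logits = model.next_token_logits(ids)                         # hypothetical helper
        ids.append(max(range(len(logits)), key=logits.__getitem__))   # greedy step
    future_text = tokenizer.decode(ids)
    covered = sum(kw in future_text for kw in keywords)  # constraints seen ahead
    # LM fluency plus an estimate of future constraint satisfaction
    return model.logprob(prefix_ids, cand_id) + bonus * covered
```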
arXiv Detail & Related papers (2021-12-16T09:22:54Z)
- Extract, Denoise, and Enforce: Evaluating and Predicting Lexical Constraints for Conditional Text Generation [31.341566859483056]
We present a systematic analysis of conditional generation to study whether current PLMs are good enough for preserving important concepts in the input.
We propose a framework for automatic constraint extraction, denoising, and enforcement that is shown to perform comparably or better than unconstrained generation.
arXiv Detail & Related papers (2021-04-18T05:29:02Z)
- Facts2Story: Controlling Text Generation by Key Facts [0.0]
We propose a controlled generation task based on expanding a sequence of facts, expressed in natural language, into a longer narrative.
We show that while auto-regressive, unidirectional Language Models such as GPT2 produce better fluency, they struggle to adhere to the requested facts.
We propose a plan-and-cloze model (using fine-tuned XLNet) which produces competitive fluency while adhering to the requested content.
arXiv Detail & Related papers (2020-12-08T10:14:29Z)
- TextGAIL: Generative Adversarial Imitation Learning for Text Generation [68.3579946817937]
We propose a generative adversarial imitation learning framework for text generation that uses large pre-trained language models to provide more reliable reward guidance.
Our approach uses a contrastive discriminator and proximal policy optimization (PPO) to stabilize and improve text generation performance.
arXiv Detail & Related papers (2020-04-07T00:24:35Z)