Deliberate then Generate: Enhanced Prompting Framework for Text Generation
- URL: http://arxiv.org/abs/2305.19835v1
- Date: Wed, 31 May 2023 13:23:04 GMT
- Title: Deliberate then Generate: Enhanced Prompting Framework for Text Generation
- Authors: Bei Li, Rui Wang, Junliang Guo, Kaitao Song, Xu Tan, Hany Hassan, Arul Menezes, Tong Xiao, Jiang Bian and Jingbo Zhu
- Abstract summary: Deliberate then Generate (DTG) prompting framework consists of error detection instructions and candidates that may contain errors.
We conduct extensive experiments on 20+ datasets across 7 text generation tasks, including summarization, translation, dialogue, and more.
We show that DTG consistently outperforms existing prompting methods and achieves state-of-the-art performance on multiple text generation tasks.
- Score: 70.10319005141888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have shown remarkable success across a wide
range of natural language generation tasks, where proper prompt designs make
great impacts. While existing prompting methods are normally restricted to
providing correct information, in this paper, we encourage the model to
deliberate by proposing a novel Deliberate then Generate (DTG) prompting
framework, which consists of error detection instructions and candidates that
may contain errors. DTG is a simple yet effective technique that can be applied
to various text generation tasks with minimal modifications. We conduct
extensive experiments on 20+ datasets across 7 text generation tasks, including
summarization, translation, dialogue, and more. We show that DTG consistently
outperforms existing prompting methods and achieves state-of-the-art
performance on multiple text generation tasks. We also provide in-depth
analyses to reveal the underlying mechanisms of DTG, which may inspire future
research on prompting for LLMs.
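The abstract describes DTG prompts as combining an error detection instruction with a candidate output that may contain errors. A minimal sketch of such a prompt template is shown below; the function name, wording, and translation example are illustrative assumptions, not the paper's exact prompts.

```python
# Illustrative sketch of a Deliberate-then-Generate (DTG) style prompt.
# The template wording and the translation example are assumptions; the
# paper's actual prompts may differ in phrasing and task framing.

def build_dtg_prompt(source_text: str, candidate: str, task: str = "translation") -> str:
    """Compose a DTG-style prompt: show a candidate output that may
    contain errors, ask the model to deliberate (detect errors) first,
    and only then generate the final output."""
    return (
        f"Source: {source_text}\n"
        f"Candidate {task}: {candidate}\n"
        "The candidate above may contain errors. "
        "First, identify any errors in the candidate. "
        f"Then, provide a corrected {task} of the source."
    )

# Example usage with a deliberately flawed candidate translation.
prompt = build_dtg_prompt("Bonjour le monde", "Hello word")
print(prompt)
```

The key design point is that, unlike standard prompting, the candidate is allowed (and even expected) to be wrong, which prompts the model to deliberate before generating.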
Related papers
- GigaCheck: Detecting LLM-generated Content [72.27323884094953]
In this work, we investigate the task of generated text detection by proposing GigaCheck.
Our research explores two approaches: (i) distinguishing human-written texts from LLM-generated ones, and (ii) detecting LLM-generated intervals in Human-Machine collaborative texts.
Specifically, we use a fine-tuned general-purpose LLM in conjunction with a DETR-like detection model, adapted from computer vision, to localize AI-generated intervals within text.
arXiv Detail & Related papers (2024-10-31T08:30:55Z)
- Controllable Text Generation for Large Language Models: A Survey [27.110528099257156]
This paper systematically reviews the latest advancements in Controllable Text Generation for Large Language Models.
We categorize CTG tasks into two primary types: content control and attribute control.
We address key challenges in current research, including reduced fluency and limited practicality.
arXiv Detail & Related papers (2024-08-22T17:59:04Z)
- Controllable Text Generation in the Instruction-Tuning Era [3.310278632293704]
We find that prompting-based approaches outperform controllable text generation methods on most datasets and tasks.
We provide an algorithm that uses only a task dataset and a Large Language Model with in-context capabilities to automatically generate a constraint dataset.
arXiv Detail & Related papers (2024-05-02T17:24:30Z)
- Plug and Play with Prompts: A Prompt Tuning Approach for Controlling Text Generation [16.49758711633611]
Large Language Models (LLMs) have shown exceptional language generation capabilities in response to text-based prompts.
In this work, we explore the use of Prompt Tuning to achieve controlled language generation.
We demonstrate the efficacy of our method towards mitigating harmful, toxic, and biased text generated by language models.
arXiv Detail & Related papers (2024-04-08T01:54:28Z)
- Meta-Task Prompting Elicits Embeddings from Large Language Models [54.757445048329735]
We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation.
We generate high-quality sentence embeddings from Large Language Models without the need for model fine-tuning.
Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.
arXiv Detail & Related papers (2024-02-28T16:35:52Z)
- Unlocking Anticipatory Text Generation: A Constrained Approach for Large Language Models Decoding [75.06872859716049]
Large Language Models (LLMs) have demonstrated a powerful ability for text generation. However, undesired behaviors such as toxicity or hallucinations can manifest.
We propose formalizing text generation as a future-constrained generation problem.
arXiv Detail & Related papers (2023-12-11T06:35:33Z)
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios [87.12753459582116]
A wider range of tasks now faces an increasing risk of containing factual errors when handled by generative models.
We propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models.
arXiv Detail & Related papers (2023-07-25T14:20:51Z)
- Learning to Transfer Prompts for Text Generation [97.64625999380425]
We propose a novel prompt-based method (PTG) for text generation in a transferable setting.
First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.
In extensive experiments, PTG yields competitive or better results than fine-tuning methods.
arXiv Detail & Related papers (2022-05-03T14:53:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.