Shepherd Pre-trained Language Models to Develop a Train of Thought: An
Iterative Prompting Approach
- URL: http://arxiv.org/abs/2203.08383v1
- Date: Wed, 16 Mar 2022 04:12:20 GMT
- Title: Shepherd Pre-trained Language Models to Develop a Train of Thought: An
Iterative Prompting Approach
- Authors: Boshi Wang, Xiang Deng, Huan Sun
- Abstract summary: Pre-trained Language Models (PLMs) have been shown incapable of recalling knowledge to solve tasks requiring complex, multi-step inference procedures.
Similar to how humans develop a "train of thought" for these tasks, how can we equip PLMs with such abilities?
We propose an iterative context-aware prompter, which addresses these limitations by learning to dynamically synthesize prompts conditioned on the current step's context.
- Score: 30.117038793151004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Pre-trained Language Models (PLMs) internalize a great amount of world
knowledge, they have been shown incapable of recalling this knowledge to solve
tasks requiring complex, multi-step inference procedures. Similar to how
humans develop a "train of thought" for these tasks, how can we equip PLMs with
such abilities? In this work, we explore an iterative prompting framework, a
new prompting paradigm which progressively elicits relevant knowledge from PLMs
for multi-step inference tasks. We identify key limitations of existing
prompting methods, namely that they are either restricted to queries with a single
identifiable relation/predicate or are agnostic to input contexts, which
makes it difficult to capture variability across different inference steps.
We propose an iterative context-aware prompter, which addresses these
limitations by learning to dynamically synthesize prompts conditioned on the
current step's contexts. Experiments on three datasets involving multi-step
inference show the effectiveness of the iterative scheme and our proposed
prompter design.
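The iterative scheme described in the abstract can be pictured as a loop: at each step, a context-aware prompter synthesizes a prompt from the query plus the knowledge elicited so far, the PLM generates one new piece of knowledge, and that piece is appended to the context for the next step. The sketch below is a minimal illustration of this loop, not the authors' implementation; `iterative_prompting`, the `prompter`/`plm_generate` interfaces, and the toy stand-ins are all hypothetical names introduced here.

```python
# Minimal sketch of an iterative prompting loop: a (hypothetical) context-aware
# prompter builds each step's prompt from the query plus everything elicited so
# far; the PLM emits one knowledge statement per step until it signals it is done.

from typing import Callable, List

def iterative_prompting(
    query: str,
    prompter: Callable[[str, List[str]], str],   # synthesizes prompt from query + context
    plm_generate: Callable[[str], str],          # one PLM decoding call
    max_steps: int = 5,
    stop_token: str = "[DONE]",
) -> List[str]:
    """Progressively elicit knowledge statements for a multi-step query."""
    context: List[str] = []
    for _ in range(max_steps):
        prompt = prompter(query, context)        # conditioned on the current step's context
        knowledge = plm_generate(prompt).strip()
        if knowledge == stop_token or not knowledge:
            break
        context.append(knowledge)
    return context

# Toy stand-ins so the loop is runnable without a real PLM.
facts = iter(["A is north of B.", "B is north of C.", "[DONE]"])

def toy_prompter(query: str, context: List[str]) -> str:
    return query + "\n" + "\n".join(context) + "\nNext fact:"

def toy_plm(prompt: str) -> str:
    return next(facts)

print(iterative_prompting("Is A north of C?", toy_prompter, toy_plm))
# → ['A is north of B.', 'B is north of C.']
```

In the paper's framing, the key design point is that `prompter` is learned and conditioned on the evolving context, rather than being a fixed template that ignores which inference step it is on.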
Related papers
- ProcBench: Benchmark for Multi-Step Reasoning and Following Procedure [0.0]
We propose a benchmark that focuses on a specific aspect of reasoning ability: the direct evaluation of multi-step inference.
Our dataset comprises pairs of explicit instructions and corresponding questions, where the procedures necessary for solving the questions are entirely detailed within the instructions.
By constructing problems that require varying numbers of steps to solve and evaluating responses at each step, we enable a thorough assessment of state-of-the-art LLMs' ability to follow instructions.
arXiv Detail & Related papers (2024-10-04T03:21:24Z) - SpeechPrompt: Prompting Speech Language Models for Speech Processing Tasks [94.10497337235083]
We are the first to explore the potential of prompting speech LMs in the domain of speech processing.
We reformulate speech processing tasks into speech-to-unit generation tasks.
We show that the prompting method can achieve competitive performance compared to the strong fine-tuning method.
arXiv Detail & Related papers (2024-08-23T13:00:10Z) - TemPrompt: Multi-Task Prompt Learning for Temporal Relation Extraction in RAG-based Crowdsourcing Systems [21.312052922118585]
Temporal relation extraction (TRE) aims to grasp the evolution of events or actions, and thus shape the workflow of associated tasks.
We propose a multi-task prompt learning framework for TRE (TemPrompt), incorporating prompt tuning and contrastive learning to tackle these issues.
arXiv Detail & Related papers (2024-06-21T01:52:37Z) - Scalable Language Model with Generalized Continual Learning [58.700439919096155]
Joint Adaptive Re-Parameterization (JARe) is integrated with Dynamic Task-related Knowledge Retrieval (DTKR) to enable adaptive adjustment of language models based on specific downstream tasks.
Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting.
arXiv Detail & Related papers (2024-04-11T04:22:15Z) - Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning [49.92517970237088]
We tackle the problem of training a robot to understand multimodal prompts.
This type of task poses a major challenge to robots' capability to understand the interconnection and complementarity between vision and language signals.
We introduce an effective framework that learns a policy to perform robot manipulation with multimodal prompts.
arXiv Detail & Related papers (2023-10-14T22:24:58Z) - Diversity of Thought Improves Reasoning Abilities of LLMs [26.149914503910235]
Large language models (LLMs) are documented to struggle in settings that require complex reasoning.
We discuss how one can create and leverage variations of the input prompt as a means of diversity of thought.
arXiv Detail & Related papers (2023-10-11T00:01:41Z) - OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning [49.38867353135258]
We propose OverPrompt, leveraging the in-context learning capability of LLMs to handle multiple task inputs.
Our experiments show that OverPrompt can achieve cost-efficient zero-shot classification without causing significant detriment to task performance.
arXiv Detail & Related papers (2023-05-24T10:08:04Z) - Making Pre-trained Language Models End-to-end Few-shot Learners with
Contrastive Prompt Tuning [41.15017636192417]
We present CP-Tuning, the first end-to-end Contrastive Prompt Tuning framework for fine-tuning Language Models.
It integrates a task-invariant continuous prompt encoding technique with fully trainable prompt parameters.
Experiments over a variety of language understanding tasks used in IR systems and different PLMs show that CP-Tuning outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-04-01T02:24:24Z) - CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented
Dialog Systems [56.302581679816775]
This paper proposes Comprehensive Instruction (CINS) that exploits PLMs with task-specific instructions.
We design a schema (definition, constraint, prompt) of instructions and their customized realizations for three important downstream tasks in ToD.
Experiments are conducted on these ToD tasks in realistic few-shot learning scenarios with small validation data.
arXiv Detail & Related papers (2021-09-10T03:23:06Z) - Learning to Ask Conversational Questions by Optimizing Levenshtein
Distance [83.53855889592734]
We introduce a Reinforcement Iterative Sequence Editing (RISE) framework that optimizes the minimum Levenshtein distance (MLD) through explicit editing actions.
RISE is able to pay attention to tokens that are related to conversational characteristics.
Experimental results on two benchmark datasets show that RISE significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-06-30T08:44:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.