Self-Consistent Narrative Prompts on Abductive Natural Language Inference
- URL: http://arxiv.org/abs/2309.08303v1
- Date: Fri, 15 Sep 2023 10:48:10 GMT
- Title: Self-Consistent Narrative Prompts on Abductive Natural Language Inference
- Authors: Chunkit Chan, Xin Liu, Tsz Ho Chan, Jiayang Cheng, Yangqiu Song, Ginny Wong, Simon See
- Abstract summary: Abduction has long been seen as crucial for narrative comprehension and reasoning about everyday situations.
We propose a prompt tuning model $\alpha$-PACE, which takes self-consistency and inter-sentential coherence into consideration.
- Score: 42.201304482932706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abduction has long been seen as crucial for narrative comprehension and reasoning about everyday situations. The abductive natural language inference ($\alpha$NLI) task has been proposed for this purpose: given two observations, this narrative text-based task aims to infer the most plausible hypothesis from a set of candidates. However, inter-sentential coherence and model consistency have not been well exploited in previous work on this task. In this work, we propose a prompt tuning model, $\alpha$-PACE, which takes self-consistency and inter-sentential coherence into consideration. In addition, we propose a general self-consistent framework that considers various narrative sequences (e.g., linear narrative and reverse chronology) to guide the pre-trained language model in understanding the narrative context of the input. We conduct extensive experiments and thorough ablation studies to illustrate the necessity and effectiveness of $\alpha$-PACE. Our method shows significant improvement over a wide range of competitive baselines.
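As a rough illustration of the narrative-sequence idea, the sketch below inserts each candidate hypothesis between the two observations, renders the story in two narrative orders, scores each rendering with an off-the-shelf language model, and averages the scores. This is a minimal sketch, not the authors' code: $\alpha$-PACE performs prompt tuning, which is omitted here, and the model choice (GPT-2), templates, and example text are assumptions.

```python
# Hypothetical sketch of narrative-order scoring for alpha-NLI; the real
# alpha-PACE uses prompt tuning, which this sketch omits.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sequence_log_likelihood(text: str) -> float:
    """Average token log-likelihood of `text` under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

def score_hypothesis(o1: str, hyp: str, o2: str) -> float:
    # Render the same story under several narrative sequences
    # (linear narrative vs. reverse chronology), then aggregate the
    # per-ordering scores for a simple form of self-consistency.
    orderings = [
        f"{o1} {hyp} {o2}",                             # linear narrative
        f"{o2} Before that, {hyp} Before that, {o1}",   # reverse chronology
    ]
    scores = [sequence_log_likelihood(s) for s in orderings]
    return sum(scores) / len(scores)

# Illustrative example (invented text, not from the paper's data).
o1 = "Jenny left her window open when she went to work."
o2 = "When she came home, her room was a mess."
hyps = ["A storm blew rain and leaves inside.", "Jenny had cleaned her room."]
print(max(hyps, key=lambda h: score_hypothesis(o1, h, o2)))
```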
Related papers
- Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference [12.941933077524919]
We introduce "hinting," a data augmentation technique that enhances contextualized commonsense inference.
"Hinting" employs a prefix prompting strategy using both hard and soft prompts to guide the inference process.
Our results show that "hinting" does not compromise the performance of contextual commonsense inference while offering improved controllability.
arXiv Detail & Related papers (2024-10-03T04:32:46Z)
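A minimal sketch of how the hard-plus-soft prefix idea in "hinting" could look, assuming a standard prompt tuning setup; the module name, sizes, and placement are illustrative, not the paper's implementation.

```python
# Illustrative soft-prefix module: learned prompt vectors are prepended to
# the embedded input, which already contains a hard (textual) hint such as
# "Hint: the inference concerns X." Not the paper's code.
import torch
import torch.nn as nn

class SoftPrefix(nn.Module):
    def __init__(self, n_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # Trainable "soft prompt" embeddings, one per virtual token.
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden), hard hint tokens included.
        batch = input_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)
```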
- Fine-Grained Modeling of Narrative Context: A Coherence Perspective via Retrospective Questions [48.18584733906447]
This work introduces an original and practical paradigm for narrative comprehension, stemming from the observation that passages within a narrative tend to be more cohesively related than isolated ones.
We propose a fine-grained modeling of narrative context by formulating a graph, dubbed NarCo, that explicitly depicts task-agnostic coherence dependencies.
arXiv Detail & Related papers (2024-02-21T06:14:04Z)
- DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning [59.4644086610381]
We propose a novel denoising objective that works from a different perspective, i.e., the intra-sentence perspective.
By introducing both discrete and continuous noise, we generate noisy sentences and then train our model to restore them to their original form.
Our empirical evaluations demonstrate that this approach delivers competitive results on both semantic textual similarity (STS) and a wide range of transfer tasks.
arXiv Detail & Related papers (2024-01-24T17:48:45Z)
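A minimal sketch of the two noise types DenoSent describes, assuming token deletion as the discrete noise and Gaussian perturbation of embeddings as the continuous noise; the actual corruptions and restoration objective may differ.

```python
# Illustrative noising functions in the spirit of DenoSent: a discrete
# corruption (random token deletion) and a continuous one (Gaussian noise
# on embeddings). The model is then trained to restore the original sentence.
import random
import torch

def discrete_noise(tokens: list[str], drop_prob: float = 0.15) -> list[str]:
    """Randomly delete tokens from a sentence."""
    kept = [t for t in tokens if random.random() > drop_prob]
    return kept if kept else tokens  # never return an empty sentence

def continuous_noise(embeds: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Add Gaussian noise to token embeddings of shape (batch, seq_len, hidden)."""
    return embeds + sigma * torch.randn_like(embeds)
```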
- Topic-DPR: Topic-based Prompts for Dense Passage Retrieval [6.265789210037749]
We present Topic-DPR, a dense passage retrieval model that uses topic-based prompts.
We introduce a novel positive and negative sampling strategy, leveraging semi-structured data to boost dense retrieval efficiency.
arXiv Detail & Related papers (2023-10-10T13:45:24Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Probing as Quantifying the Inductive Bias of Pre-trained Representations [99.93552997506438]
We present a novel framework for probing where the goal is to evaluate the inductive bias of representations for a particular task.
We apply our framework to a series of token-, arc-, and sentence-level tasks.
arXiv Detail & Related papers (2021-10-15T22:01:16Z)
- Dynamic Sliding Window for Meeting Summarization [25.805553277418813]
We analyze the linguistic characteristics of meeting transcripts on a representative corpus, and find that the sentences comprising the summary correlate with the meeting agenda.
We propose a dynamic sliding window strategy for meeting summarization.
arXiv Detail & Related papers (2021-08-31T05:39:48Z)
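A minimal sketch of window segmentation over a meeting transcript; the fixed size and stride here are simplifying assumptions, whereas the paper's strategy sets window boundaries dynamically (e.g., following agenda cues).

```python
# Simplified sliding-window segmentation of a meeting transcript; each
# window is summarized independently and the partial summaries are joined.
# Overlap (size > stride) preserves context across window boundaries.
def sliding_windows(sentences: list[str], size: int = 50, stride: int = 25):
    """Yield overlapping windows of transcript sentences for summarization."""
    for start in range(0, max(len(sentences) - size, 0) + 1, stride):
        yield sentences[start:start + size]
```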
- COINS: Dynamically Generating COntextualized Inference Rules for Narrative Story Completion [16.676036625561057]
We present COINS, a framework that iteratively reads context sentences, generates contextualized inference rules, encodes them, and guides task-specific output generation.
By modularizing inference and sentence generation steps in a recurrent model, we aim to make reasoning steps and their effects on next sentence generation transparent.
Our automatic and manual evaluations show that the model generates better story sentences than SOTA baselines, especially in terms of coherence.
arXiv Detail & Related papers (2021-06-04T14:06:33Z)
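A schematic of the iterative read-generate loop COINS describes; `generate_rules` and `generate_sentence` are hypothetical placeholders for the paper's rule generator and sentence generator, which this sketch does not implement.

```python
# Schematic of a COINS-style recurrence: read the story so far, generate
# contextualized inference rules, then generate the next sentence guided
# by those rules. The two callables are hypothetical placeholders.
def complete_story(context: list[str], n_steps: int,
                   generate_rules, generate_sentence) -> list[str]:
    story = list(context)
    for _ in range(n_steps):
        rules = generate_rules(story)          # contextualized inference rules
        nxt = generate_sentence(story, rules)  # rule-guided next sentence
        story.append(nxt)
    return story
```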
- Structural Pre-training for Dialogue Comprehension [51.215629336320305]
We present SPIDER (Structural Pre-traIned DialoguE Reader) to capture dialogue-exclusive features.
To simulate the dialogue-like features, we propose two training objectives in addition to the original LM objectives.
Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
arXiv Detail & Related papers (2021-05-23T15:16:54Z)