Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning
- URL: http://arxiv.org/abs/2309.10359v1
- Date: Tue, 19 Sep 2023 06:42:37 GMT
- Title: Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning
- Authors: Peter Ebert Christensen, Srishti Yadav, Serge Belongie
- Abstract summary: We focus on fine-grained debate topics and formulate a new task of distilling a countable set of narratives.
We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label.
We find that generated claims with supported evidence can be used to improve the performance of narrative classification models.
- Score: 5.893124686141782
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupported and unfalsifiable claims we encounter in our daily lives can
influence our view of the world. Characterizing, summarizing, and -- more
generally -- making sense of such claims, however, can be challenging. In this
work, we focus on fine-grained debate topics and formulate a new task of
distilling, from such claims, a countable set of narratives. We present a
crowdsourced dataset of 12 controversial topics, comprising more than 120k
arguments, claims, and comments from heterogeneous sources, each annotated with
a narrative label. We further investigate how large language models (LLMs) can
be used to synthesise claims using In-Context Learning. We find that generated
claims with supported evidence can be used to improve the performance of
narrative classification models and, additionally, that the same model can
infer the stance and aspect using a few training examples. Such a model can be
useful in applications that rely on narratives, e.g., fact-checking.
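
As a rough illustration of the claim-synthesis step described above, the sketch below shows one way a few-shot In-Context Learning prompt could be assembled: the model is conditioned on a debate topic and a handful of (claim, narrative) demonstrations and asked to produce a new claim for a target narrative, which could then augment the training data of a narrative classifier. This is not the authors' released code; the topic, example claims, narrative labels, and the `generate` callable standing in for an LLM API are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): building a
# few-shot In-Context Learning prompt for claim synthesis. The topic,
# demonstration claims, and narrative labels below are made-up placeholders.

from typing import Callable, List, Tuple


def build_icl_prompt(topic: str,
                     examples: List[Tuple[str, str]],
                     target_narrative: str) -> str:
    """Assemble a prompt from (claim, narrative) demonstrations."""
    lines = [f"Topic: {topic}",
             "Below are claims and the narrative each one expresses.",
             ""]
    for claim, narrative in examples:
        lines.append(f"Claim: {claim}")
        lines.append(f"Narrative: {narrative}")
        lines.append("")
    lines.append(f"Write a new claim that expresses the narrative: {target_narrative}")
    lines.append("Claim:")
    return "\n".join(lines)


def synthesise_claims(generate: Callable[[str], str],
                      topic: str,
                      examples: List[Tuple[str, str]],
                      target_narrative: str,
                      n: int = 3) -> List[str]:
    """Query an LLM `n` times; generated claims could be used as extra
    training examples for a narrative classification model."""
    prompt = build_icl_prompt(topic, examples, target_narrative)
    return [generate(prompt).strip() for _ in range(n)]


if __name__ == "__main__":
    # Illustrative unsupported claims for one hypothetical topic.
    demos = [("5G towers weaken the immune system.", "health risks"),
             ("5G rollout is a pretext for mass surveillance.", "surveillance")]
    print(build_icl_prompt("5G", demos, "health risks"))
```

In this sketch, `generate` would wrap whatever LLM endpoint is available; only the prompt construction is shown concretely.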
Related papers
- Causal Micro-Narratives [62.47217054314046]
We present a novel approach to classify causal micro-narratives from text.
These narratives are sentence-level explanations of the cause(s) and/or effect(s) of a target subject.
arXiv Detail & Related papers (2024-10-07T17:55:10Z)
- Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference [12.941933077524919]
We introduce "hinting," a data augmentation technique that enhances contextualized commonsense inference.
"Hinting" employs a prefix prompting strategy using both hard and soft prompts to guide the inference process.
Our results show that "hinting" does not compromise the performance of contextual commonsense inference while offering improved controllability.
arXiv Detail & Related papers (2024-10-03T04:32:46Z)
- Enhancing Argument Structure Extraction with Efficient Leverage of Contextual Information [79.06082391992545]
We propose an Efficient Context-aware model (ECASE) that fully exploits contextual information.
We introduce a sequence-attention module and distance-weighted similarity loss to aggregate contextual information and argumentative information.
Our experiments on five datasets from various domains demonstrate that our model achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-08T08:47:10Z)
- Which Argumentative Aspects of Hate Speech in Social Media can be reliably identified? [2.7647400328727256]
It is unclear which aspects of argumentation can be reliably identified and integrated in language models.
We show that some components can be identified with reasonable reliability.
We propose adaptations of those categories that can be more reliably reproduced.
arXiv Detail & Related papers (2023-06-05T15:50:57Z)
- Can NLP Models Correctly Reason Over Contexts that Break the Common Assumptions? [14.991565484636745]
We investigate the ability of NLP models to correctly reason over contexts that break the common assumptions.
We show that while doing fairly well on contexts that follow the common assumptions, the models struggle to correctly reason over contexts that break those assumptions.
Specifically, the performance gap is as high as 20 absolute percentage points.
arXiv Detail & Related papers (2023-05-20T05:20:37Z)
- Topics in the Haystack: Extracting and Evaluating Topics beyond Coherence [0.0]
We propose a method that incorporates a deeper understanding of both sentence and document themes.
This allows our model to detect latent topics that may include uncommon words or neologisms.
We present correlation coefficients with human identification of intruder words and achieve near-human level results at the word-intrusion task.
arXiv Detail & Related papers (2023-03-30T12:24:25Z)
- ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining [61.82562838486632]
We crowdsource four new datasets on diverse online conversation forms of news comments, discussion forums, community question answering forums, and email threads.
We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.
arXiv Detail & Related papers (2021-06-01T22:17:13Z)
- Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE).
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
arXiv Detail & Related papers (2020-10-05T11:21:59Z)
- Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z)
- Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model the ability of inter-topic generalization.
We also propose a prototype encoding structure to model the ability of intra-topic derivation.
Experimental results show that topic adaptation and prototype encoding structure mutually bring benefit to the few-shot model.
arXiv Detail & Related papers (2020-08-11T03:55:11Z)