Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
- URL: http://arxiv.org/abs/2007.15121v2
- Date: Mon, 17 May 2021 17:10:02 GMT
- Title: Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
- Authors: Arjun Roy, Pavlos Fafalios, Asif Ekbal, Xiaofei Zhu, Stefan Dietze
- Abstract summary: Stance detection aims at identifying the position (stance) of a document towards a claim.
We propose a modular pipeline of cascading binary classifiers, enabling performance tuning on a per-step and per-class basis.
We implement our approach through a combination of neural and traditional classification models that highlight the misclassification costs of minority classes.
- Score: 24.898077978955406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fact checking is an essential challenge when combating fake news. Identifying
documents that agree or disagree with a particular statement (claim) is a core
task in this process. In this context, stance detection aims at identifying the
position (stance) of a document towards a claim. Most approaches address this
task through a 4-class classification model where the class distribution is
highly imbalanced. Therefore, they are particularly ineffective in detecting
the minority classes (for instance, 'disagree'), even though such instances are
crucial for tasks such as fact-checking by providing evidence for detecting
false claims. In this paper, we exploit the hierarchical nature of stance
classes, which allows us to propose a modular pipeline of cascading binary
classifiers, enabling performance tuning on a per-step and per-class basis. We
implement our approach through a combination of neural and traditional
classification models that highlight the misclassification costs of minority
classes. Evaluation results demonstrate state-of-the-art performance of our
approach and its ability to significantly improve the classification
performance of the important 'disagree' class.
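The abstract does not spell out the cascade, but the hierarchy it exploits is the standard 4-class stance scheme (unrelated / discuss / agree / disagree). Below is a minimal sketch of such a pipeline, using logistic-regression stages with per-stage class weights as hypothetical stand-ins for the paper's actual mix of neural and traditional models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class CascadingStanceClassifier:
    """Sketch of a cascade of binary classifiers for 4-class stance detection.

    Stage 1: unrelated vs. related
    Stage 2 (related only): discuss vs. has-stance
    Stage 3 (has-stance only): agree vs. disagree
    Each stage gets its own class_weight dict, so misclassification costs
    can be tuned per step and per class (e.g. up-weighting rare 'disagree').
    """

    def __init__(self, weights=({0: 1, 1: 1}, {0: 1, 1: 1}, {0: 1, 1: 5})):
        self.stages = [LogisticRegression(class_weight=w, max_iter=1000)
                       for w in weights]

    def fit(self, X, y):
        # y: numpy array of labels 'unrelated', 'discuss', 'agree', 'disagree'
        related = y != "unrelated"
        self.stages[0].fit(X, related.astype(int))
        stance = np.isin(y, ["agree", "disagree"])
        self.stages[1].fit(X[related], stance[related].astype(int))
        disagree = y == "disagree"
        self.stages[2].fit(X[stance], disagree[stance].astype(int))
        return self

    def predict(self, X):
        out = np.full(len(X), "unrelated", dtype=object)
        rel = self.stages[0].predict(X).astype(bool)
        if rel.any():
            idx = np.flatnonzero(rel)
            st = self.stages[1].predict(X[rel]).astype(bool)
            out[idx[~st]] = "discuss"
            if st.any():
                dis = self.stages[2].predict(X[rel][st]).astype(bool)
                out[idx[st]] = np.where(dis, "disagree", "agree")
        return out
```

Raising the positive-class weight in the last stage is one way to realize the per-class cost tuning the abstract describes: it trades some 'agree' precision for recall on the rare 'disagree' class.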
Related papers
- Classification Matters: Improving Video Action Detection with Class-Specific Attention [61.14469113965433]
Video action detection (VAD) aims to detect actors and classify their actions in a video.
We analyze how prevailing methods form features for classification and find that they prioritize actor regions.
We propose to reduce the bias toward the actor and encourage paying attention to the context that is relevant to each action class.
arXiv Detail & Related papers (2024-07-29T04:43:58Z)
- Rethinking Object Saliency Ranking: A Novel Whole-flow Processing Paradigm [22.038715439842044]
This paper proposes a new paradigm for saliency ranking, which focuses entirely on ranking salient objects by their "importance order".
The proposed approach outperforms existing state-of-the-art methods on the widely-used SALICON set.
arXiv Detail & Related papers (2023-12-06T01:51:03Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose by a clear margin.
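The weight-initialization idea is simple to picture: the classification head starts from normalized text embeddings of the HOI class names, so semantically related classes begin with correlated scores. A sketch with random stand-ins for the actual language embeddings:

```python
import numpy as np

# Hypothetical: one text embedding per HOI class from a sentence encoder.
rng = np.random.default_rng(0)
num_classes, embed_dim = 600, 512
class_text_embeddings = rng.normal(size=(num_classes, embed_dim))  # stand-in

# Initialize the linear head with L2-normalized language embeddings so that
# semantically similar classes start from similar weight vectors.
W = class_text_embeddings / np.linalg.norm(class_text_embeddings,
                                           axis=1, keepdims=True)
b = np.zeros(num_classes)

def hoi_scores(features):
    """Score image features of shape (batch, embed_dim) against all classes."""
    return features @ W.T + b
```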
arXiv Detail & Related papers (2022-03-10T23:35:00Z)
- Rating Facts under Coarse-to-fine Regimes [0.533024001730262]
We collect 24K manually rated statements from PolitiFact.
Our task represents a twist on standard classification due to the varying degrees of similarity between classes.
After training, class similarity becomes apparent across the multi-class datasets, especially in the fine-grained one.
arXiv Detail & Related papers (2021-07-13T13:05:11Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
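The summary only names the loss, but the description suggests flipping the usual softmax direction: normalizing similarities over queries for each prototype, rather than over prototypes for each query. A hypothetical reconstruction of that idea (not necessarily the paper's exact formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_centered_loss(prototypes, queries, query_labels):
    """Each class prototype must pick out its own queries among all
    queries in the episode (the complement of query-centered training).

    prototypes: (n_classes, dim), queries: (n_queries, dim),
    query_labels: (n_queries,) integer class ids.
    """
    sims = prototypes @ queries.T   # (n_classes, n_queries)
    p = softmax(sims, axis=1)       # per-prototype distribution over queries
    loss = 0.0
    for c in range(len(prototypes)):
        own = p[c, query_labels == c]        # the class's own queries
        loss += -np.log(own + 1e-12).mean()
    return loss / len(prototypes)
```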
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
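The two-stage recipe is easy to sketch: freeze a learned representation, then fit a shallow one-class model on top of it. Below, a fixed random projection stands in for the self-supervised encoder and scikit-learn's OneClassSVM for the second-stage classifier; both are illustrative choices rather than the paper's:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stage 1 (assumed trained elsewhere): a self-supervised encoder.
# A fixed random projection with L2 normalization stands in for it here.
projection = rng.normal(size=(784, 64))

def encode(x):
    z = x @ projection
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Stage 2: fit a shallow one-class classifier on the frozen representations.
train_normal = rng.normal(size=(500, 784))   # inlier (one-class) training data
detector = OneClassSVM(kernel="rbf", nu=0.1).fit(encode(train_normal))

test = rng.normal(size=(10, 784))
scores = detector.decision_function(encode(test))  # higher = more inlier-like
```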
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
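As a rough illustration of distilling components with multi-head self-attention (standard attention, not necessarily the paper's exact variant), each head attends over an utterance's tokens and yields its own view of the utterance:

```python
import numpy as np

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Self-attention over one utterance's token embeddings.

    X: (n_tokens, d_model); Wq, Wk, Wv: (d_model, d_model).
    Returns (n_heads, n_tokens, d_head); pooling each head over tokens
    gives one 'semantic component' vector per head.
    """
    n_tokens, d_model = X.shape
    d_head = d_model // n_heads
    split = lambda M: (X @ M).reshape(n_tokens, n_heads, d_head).transpose(1, 0, 2)
    Q, K, V = split(Wq), split(Wk), split(Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, T, T)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                   # row-wise softmax
    return w @ V
```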
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Adversarial Examples and Metrics [14.068394742881425]
Adversarial examples are a type of attack on machine learning (ML) systems that causes inputs to be misclassified.
We study the limitations of robust classification if the target metric is uncertain.
arXiv Detail & Related papers (2020-07-14T12:20:53Z)
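As a generic illustration of the phenomenon (not this paper's construction), the classic fast-gradient-sign attack on a toy logistic-regression model shows how a small, targeted perturbation flips a prediction:

```python
import numpy as np

# Toy binary classifier: p(y=1 | x) = sigmoid(w.x + b).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps):
    """Fast-gradient-sign perturbation for logistic regression.

    The gradient of the log-loss w.r.t. the input is (p - y) * w, so the
    worst-case L-infinity perturbation of size eps follows its sign.
    """
    p = sigmoid(w @ x + b)
    return x + eps * np.sign((p - y) * w)

x = np.array([0.2, -0.1, 0.4])             # clean input, true label y = 1
print(sigmoid(w @ x + b))                  # ~0.67: classified as positive
print(sigmoid(w @ fgsm(x, 1.0, 0.5) + b))  # ~0.26: flipped by the attack
```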
This list is automatically generated from the titles and abstracts of the papers on this site.