Tribrid: Stance Classification with Neural Inconsistency Detection
- URL: http://arxiv.org/abs/2109.06508v1
- Date: Tue, 14 Sep 2021 08:13:03 GMT
- Title: Tribrid: Stance Classification with Neural Inconsistency Detection
- Authors: Song Yang and Jacopo Urbani
- Abstract summary: We study the problem of performing automatic stance classification on social media with neural architectures such as BERT.
We present a new neural architecture where the input also includes automatically generated negated perspectives over a given claim.
The model is jointly trained to make multiple predictions simultaneously, which can be used either to improve the classification of the original perspective or to filter out doubtful predictions.
- Score: 9.150728831518459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of performing automatic stance classification on social media with neural architectures such as BERT. Although these architectures deliver impressive results, their performance is not yet comparable to that of humans, and they might produce errors that have a significant impact on the downstream task (e.g., fact-checking). To improve performance, we present a new neural architecture where the input also includes automatically generated negated perspectives over a given claim. The model is jointly trained to make multiple predictions simultaneously, which can be used either to improve the classification of the original perspective or to filter out doubtful predictions. In the first case, we propose a weakly supervised method for combining the predictions into a final one. In the second case, we show that using the confidence scores to remove doubtful predictions allows our method to achieve human-like performance over the retained information, which is still a sizable part of the original input.
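As a rough illustration of the idea (not the authors' code), the sketch below pairs a claim with both a perspective and a heuristically negated version of it, classifies both with a BERT sequence classifier, and abstains when the two predictions are inconsistent or under-confident. The model name, negation rule, label set, and threshold are all assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g. support / oppose / neutral

def negate(perspective):
    # Placeholder for the paper's automatic negation generator.
    return "It is not true that " + perspective

def stance_with_filtering(claim, perspective, threshold=0.9):
    probs = []
    for p in (perspective, negate(perspective)):
        enc = tokenizer(claim, p, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs.append(model(**enc).logits.softmax(-1).squeeze(0))
    orig, neg = probs
    # Negating the perspective should flip the stance; if both inputs get
    # the same label, or confidence is low, treat the prediction as doubtful.
    if orig.max() < threshold or orig.argmax() == neg.argmax():
        return None  # abstain
    return int(orig.argmax())
```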
Related papers
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
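A minimal sketch of a two-wing normalization pass consistent with this description, assuming one wing keeps the adversarially pre-trained BatchNorm statistics frozen while the other adapts to the downstream data; the equal-weight combination is an assumption, not the paper's exact recipe.

```python
import torch.nn as nn

class TwoWingNorm(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # In practice, load the pre-trained running_mean/var into `frozen`.
        self.frozen = nn.BatchNorm2d(channels)
        for p in self.frozen.parameters():
            p.requires_grad = False
        self.adaptive = nn.BatchNorm2d(channels)  # adapts during fine-tuning

    def train(self, mode=True):
        super().train(mode)
        self.frozen.eval()  # frozen wing always uses its running statistics
        return self

    def forward(self, x):
        # Equal mix of the two normalized views (illustrative weighting).
        return 0.5 * (self.frozen(x) + self.adaptive(x))
```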
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
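A hedged sketch of what a "truncation" component can look like for sparse attacks: since an $\ell_0$-bounded attacker controls at most k coordinates, dropping the k largest-magnitude per-coordinate contributions of a linear map limits how much those coordinates can sway the output. Shapes and the choice of k are illustrative, not the paper's exact construction.

```python
import torch

def truncated_linear(x, weight, k):
    # x: (batch, d) inputs, weight: (out, d); drop the k largest-magnitude
    # per-coordinate contributions, which a k-sparse attacker could dominate.
    contrib = x.unsqueeze(1) * weight.unsqueeze(0)   # (batch, out, d)
    idx = contrib.abs().topk(k, dim=-1).indices      # top-k by magnitude
    contrib = contrib.scatter(-1, idx, 0.0)          # zero them out
    return contrib.sum(-1)                           # truncated output
```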
arXiv Detail & Related papers (2022-01-23T21:18:17Z)
- Learning Uncertainty with Artificial Neural Networks for Improved Remaining Time Prediction of Business Processes [0.15229257192293202]
This paper is the first to apply these uncertainty estimation techniques to predictive process monitoring.
We found that they contribute towards more accurate predictions and are fast to compute.
This leads to many interesting applications, enables earlier adoption of prediction systems trained on smaller datasets, and fosters better cooperation with humans.
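The summary does not name the exact techniques, so the sketch below uses Monte Carlo dropout, one standard way to obtain such uncertainty estimates: dropout stays active at prediction time, and the spread over repeated stochastic passes serves as the uncertainty of a remaining-time estimate. Network shape and pass count are illustrative.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                    nn.Dropout(0.2), nn.Linear(64, 1))

def predict_with_uncertainty(x, passes=50):
    net.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(passes)])
    return samples.mean(0), samples.std(0)  # prediction, uncertainty
```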
arXiv Detail & Related papers (2021-05-12T10:18:57Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
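A minimal sketch of the entropy-raising term, assuming the overconfident regions are represented by a batch of sampled inputs `logits_ood` and that the pull towards the label prior is implemented as a KL penalty; both the sampling strategy and the weight `lam` are assumptions.

```python
import torch
import torch.nn.functional as F

def prior_kl_penalty(logits_ood, label_prior, lam=1.0):
    # KL(prior || model) pushes the predictive distribution on dubious
    # inputs towards the label prior, raising its entropy.
    log_probs = F.log_softmax(logits_ood, dim=-1)
    prior = label_prior.expand_as(log_probs)
    return lam * F.kl_div(log_probs, prior, reduction="batchmean")
```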
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Reinforcement Based Learning on Classification Task Could Yield Better Generalization and Adversarial Accuracy [0.0]
We propose a novel method to train deep learning models on an image classification task.
We use a reward-based optimization function, similar to the vanilla policy gradient method used in reinforcement learning.
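A short sketch of such a reward-based objective in the REINFORCE style: sample a label from the model's softmax, reward correct samples, and weight the log-probability by the reward. The ±1 reward scheme is an assumption.

```python
import torch

def policy_gradient_loss(logits, labels):
    # Treat the classifier's softmax as a policy over class "actions".
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()
    reward = (actions == labels).float() * 2.0 - 1.0  # +1 correct, -1 wrong
    # Vanilla policy gradient: maximize expected reward.
    return -(reward * dist.log_prob(actions)).mean()
```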
arXiv Detail & Related papers (2020-12-08T11:03:17Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows performance competitive with the state of the art on real-world and synthetic data.
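For context, a sketch of the classical doubly robust (AIPW) estimator that such double-robust representations build on; it stays consistent if either the outcome models or the propensity model is correct. The fitted models mu0, mu1, and e are assumed given.

```python
import numpy as np

def doubly_robust_ate(y, t, mu0, mu1, e):
    # y: outcomes, t: binary treatment, mu0/mu1: outcome-model predictions,
    # e: estimated propensity scores P(t=1|x)
    dr1 = mu1 + t * (y - mu1) / e
    dr0 = mu0 + (1 - t) * (y - mu0) / (1 - e)
    return np.mean(dr1 - dr0)  # average treatment effect estimate
```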
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training [1.0878040851638]
We analyze and enhance robustness properties of both classifiers and regression models.
First, we verify (non-)robustness, propose a robust training procedure, and show that our approach outperforms adversarial attacks.
Second, we provide techniques to distinguish between reliable and non-reliable predictions for unlabeled inputs, to quantify the influence of each feature on a prediction, and to compute a feature ranking.
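A sketch of one standard building block for computing such reachable sets, interval bound propagation through a linear layer followed by ReLU; the paper's exact set representation may differ, so treat this as illustrative.

```python
import torch

def interval_linear_relu(lo, hi, W, b):
    # Propagate the box [lo, hi] through y = relu(W x + b).
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    out_lo = W_pos @ lo + W_neg @ hi + b   # lower bound of W x + b
    out_hi = W_pos @ hi + W_neg @ lo + b   # upper bound of W x + b
    return out_lo.clamp(min=0), out_hi.clamp(min=0)  # ReLU is monotone
```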
arXiv Detail & Related papers (2020-07-28T10:58:06Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
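A sketch of the failure-based reweighting, following the paper's relative-difficulty idea: a bias-amplified network scores each sample, and the debiased network upweights the samples the biased one fails on. Training details (e.g., the generalized cross-entropy loss for the biased network) are omitted.

```python
import torch
import torch.nn.functional as F

def relative_difficulty_weights(logits_biased, logits_debiased, labels):
    ce_b = F.cross_entropy(logits_biased, labels, reduction="none")
    ce_d = F.cross_entropy(logits_debiased, labels, reduction="none")
    return ce_b / (ce_b + ce_d + 1e-8)  # near 1 where the biased net fails

def debiased_loss(logits_biased, logits_debiased, labels):
    w = relative_difficulty_weights(logits_biased.detach(),
                                    logits_debiased.detach(), labels)
    return (w * F.cross_entropy(logits_debiased, labels,
                                reduction="none")).mean()
```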
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- Regularizing Class-wise Predictions via Self-knowledge Distillation [80.76254453115766]
We propose a new regularization method that penalizes inconsistent predictive distributions between similar samples.
This results in regularizing the dark knowledge (i.e., the knowledge on wrong predictions) of a single network.
Our experimental results on various image classification tasks demonstrate that this simple yet powerful method can significantly improve generalization ability.
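A sketch of such a class-wise regularizer: match a sample's predictive distribution to that of another sample from the same class via a KL penalty, so a single network distills its own dark knowledge; the temperature and batching scheme are illustrative.

```python
import torch.nn.functional as F

def class_wise_kd_loss(logits, logits_same_class, T=4.0):
    # Match this sample's distribution to another same-class sample's;
    # the "teacher" side is detached, as in distillation.
    p_teacher = F.softmax(logits_same_class.detach() / T, dim=-1)
    log_p = F.log_softmax(logits / T, dim=-1)
    return F.kl_div(log_p, p_teacher, reduction="batchmean") * (T * T)
```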
arXiv Detail & Related papers (2020-03-31T06:03:51Z)
- Undersensitivity in Neural Reading Comprehension [36.142792758501706]
Current reading comprehension models generalise well to in-distribution test sets, yet perform poorly on adversarially selected inputs.
We focus on the complementary problem of excessive prediction undersensitivity, where the input text is meaningfully changed but the model's prediction does not change.
We formulate a noisy adversarial attack which searches among semantic variations of the question for which a model erroneously predicts the same answer.
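A toy sketch of the attack's search loop under stated assumptions: `model` returns an answer string and `generate_variations` yields meaning-changing rewrites of the question; the attack succeeds when the prediction fails to change.

```python
def undersensitivity_attack(model, question, context, generate_variations):
    # model(question, context) -> answer string (assumed interface)
    original = model(question, context)
    for variant in generate_variations(question):
        if model(variant, context) == original:
            return variant   # meaning changed, prediction did not
    return None              # no undersensitive variant found
```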
arXiv Detail & Related papers (2020-02-15T19:03:36Z)
- Binary Classification from Positive Data with Skewed Confidence [85.18941440826309]
Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in an annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter.
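For background, a sketch of the positive-confidence empirical risk (up to the class-prior constant) that the skewed-confidence correction builds on, with the logistic loss as an illustrative choice.

```python
import torch.nn.functional as F

def pconf_risk(outputs, r):
    # outputs: f(x) on positive samples; r: confidence p(y=+1|x) in (0, 1].
    loss_pos = F.softplus(-outputs)   # logistic loss l(f(x))
    loss_neg = F.softplus(outputs)    # logistic loss l(-f(x))
    return (loss_pos + (1 - r) / r * loss_neg).mean()
```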
arXiv Detail & Related papers (2020-01-29T00:04:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.