Transfer and Active Learning for Dissonance Detection: Addressing the
Rare-Class Challenge
- URL: http://arxiv.org/abs/2305.02459v2
- Date: Fri, 5 May 2023 00:56:19 GMT
- Title: Transfer and Active Learning for Dissonance Detection: Addressing the
Rare-Class Challenge
- Authors: Vasudha Varadarajan, Swanie Juhng, Syeda Mahwish, Xiaoran Liu, Jonah
Luby, Christian Luhmann, H. Andrew Schwartz
- Abstract summary: We propose and investigate transfer- and active learning solutions to the rare class problem of dissonance detection.
We perform experiments for a specific rare class problem: collecting language samples of cognitive dissonance from social media.
We find that probability-of-rare-class (PRC) is a simple and effective strategy to guide annotations and ultimately improve model accuracy.
- Score: 7.61140479230184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While transformer-based systems have enabled greater accuracies with fewer
training examples, data acquisition obstacles still persist for rare-class
tasks -- when the class label is very infrequent (e.g. < 5% of samples). Active
learning has in general been proposed to alleviate such challenges, but choice
of selection strategy, the criteria by which rare-class examples are chosen,
has not been systematically evaluated. Further, transformers enable iterative
transfer-learning approaches. We propose and investigate transfer- and active
learning solutions to the rare-class problem of dissonance detection by
utilizing models trained on closely related tasks and by evaluating
acquisition strategies, including a proposed probability-of-rare-class (PRC)
approach. We perform these experiments for a specific rare-class problem:
collecting language samples of cognitive dissonance from social media. We find
that PRC is a simple and effective strategy for guiding annotation and
ultimately improving model accuracy, while transfer learning in a specific
order can improve the learner's cold-start performance but does not benefit
later iterations of active learning.
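The PRC acquisition strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `prc_select` and the toy probabilities are assumptions, and in practice the probabilities would come from a trained classifier (e.g. the softmax output of a transformer for the rare class).

```python
def prc_select(pool_probs, k):
    """Probability-of-rare-class (PRC) acquisition: rank the unlabeled
    pool by the model's predicted probability of the rare class and
    select the top-k candidates for human annotation."""
    # Sort pool indices by rare-class probability, highest first.
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: pool_probs[i], reverse=True)
    return ranked[:k]

# Toy pool: P(rare class | x) for each unlabeled sample.
pool_probs = [0.02, 0.91, 0.10, 0.55, 0.03, 0.76]
print(prc_select(pool_probs, 2))  # -> [1, 5]
```

Because rare-class examples are by definition scarce (e.g. < 5% of samples), ranking by rare-class probability concentrates annotator effort on the candidates most likely to yield positive labels, rather than on generically uncertain ones.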
Related papers
- Towards Understanding the Feasibility of Machine Unlearning [14.177012256360635]
We present a set of novel metrics for quantifying the difficulty of unlearning.
Specifically, we propose several metrics to assess the conditions necessary for a successful unlearning operation.
We also present a ranking mechanism to identify the most challenging samples to unlearn.
arXiv Detail & Related papers (2024-10-03T23:41:42Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z) - SURF: Semi-supervised Reward Learning with Data Augmentation for
Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
arXiv Detail & Related papers (2022-03-18T16:50:38Z) - An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL), a major challenge in machine learning (ML), describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z) - Mind Your Outliers! Investigating the Negative Impact of Outliers on
Active Learning for Visual Question Answering [71.15403434929915]
We show that across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.
We identify the problem as collective outliers -- groups of examples that active learning methods prefer to acquire but models fail to learn.
We show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.
arXiv Detail & Related papers (2021-07-06T00:52:11Z) - SID: Incremental Learning for Anchor-Free Object Detection via Selective
and Inter-Related Distillation [16.281712605385316]
Incremental learning requires a model to continually learn new tasks from streaming data.
Traditional fine-tuning of a well-trained deep neural network on a new task will dramatically degrade performance on the old task.
We propose a novel incremental learning paradigm called Selective and Inter-related Distillation (SID).
arXiv Detail & Related papers (2020-12-31T04:12:06Z) - Message Passing Adaptive Resonance Theory for Online Active
Semi-supervised Learning [30.19936050747407]
We propose Message Passing Adaptive Resonance Theory (MPART) for online active semi-supervised learning.
MPART infers the class of unlabeled data and selects informative and representative samples through message passing between nodes on the topological graph.
We evaluate our model with comparable query selection strategies and frequencies, showing that MPART significantly outperforms the competitive models in online active learning environments.
arXiv Detail & Related papers (2020-12-02T14:14:42Z) - Progressive Multi-Stage Learning for Discriminative Tracking [25.94944743206374]
We propose a joint discriminative learning scheme with the progressive multi-stage optimization policy of sample selection for robust visual tracking.
The proposed scheme presents a novel time-weighted and detection-guided self-paced learning strategy for easy-to-hard sample selection.
Experiments on the benchmark datasets demonstrate the effectiveness of the proposed learning framework.
arXiv Detail & Related papers (2020-04-01T07:01:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.