Dominant Set-based Active Learning for Text Classification and its
Application to Online Social Media
- URL: http://arxiv.org/abs/2202.00540v1
- Date: Fri, 28 Jan 2022 19:19:03 GMT
- Title: Dominant Set-based Active Learning for Text Classification and its
Application to Online Social Media
- Authors: Toktam A. Oghaz, Ivan Garibay
- Abstract summary: We present a novel pool-based active learning method for training on a large unlabeled corpus at minimal annotation cost.
Our proposed method has no parameters to tune, making it dataset-independent.
Our method achieves higher performance than state-of-the-art active learning strategies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in natural language processing (NLP) in online social
media are evidently owed to large-scale datasets. However, labeling, storing,
and processing a large number of textual data points, e.g., tweets, remain
challenging. On top of that, in applications such as hate speech detection,
labeling a sufficiently large dataset containing offensive content can be
mentally and emotionally taxing for human annotators. Thus, NLP methods that
can make the best use of significantly fewer labeled data points are of great
interest. In this paper, we present a novel pool-based active learning method
that can be used for training on a large unlabeled corpus at minimal
annotation cost. To this end, we propose finding the dominant sets of local
clusters in the feature space; these sets represent maximally cohesive
structures in the data. The samples that do not belong to any of the dominant
sets are then selected for training the model, as they lie on the boundaries
of the local clusters and are more challenging to classify. Our proposed
method has no parameters to tune, making it dataset-independent, and it can
approximately match the classification accuracy of training on the full
dataset while using significantly fewer data points. Additionally, our method
achieves higher performance than state-of-the-art active learning strategies.
Furthermore, our proposed algorithm can incorporate conventional active
learning scores, such as uncertainty-based scores, into its selection
criteria. We show the effectiveness of our method on different datasets and
with different neural network architectures.
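The selection step lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of the idea, not the authors' implementation: local clusters are formed with k-means (the paper does not specify a clustering routine here), each cluster's dominant set is approximated with replicator dynamics on a pairwise cosine-similarity matrix (following the Pavan and Pelillo dominant-set formulation), and points outside every dominant set are returned as annotation candidates. The values of `n_clusters`, `iters`, and `cutoff` are illustrative assumptions; the paper itself reports no tunable parameters.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def dominant_set_mask(features, iters=200, tol=1e-6, cutoff=1e-4):
    """Approximate one dominant set via replicator dynamics
    (Pavan and Pelillo); True marks members of the cohesive core."""
    A = cosine_similarity(features)
    np.fill_diagonal(A, 0.0)
    A = np.clip(A, 0.0, None)  # replicator dynamics assumes non-negative affinities
    x = np.full(len(features), 1.0 / len(features))
    for _ in range(iters):
        x_new = x * (A @ x)
        total = x_new.sum()
        if total == 0.0:  # degenerate cluster: treat every point as a boundary point
            return np.zeros(len(features), dtype=bool)
        x_new /= total
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x > cutoff  # high support under the dynamics means "inside the core"

def select_boundary_samples(features, n_clusters=10, seed=0):
    """Return indices of points outside the dominant set of their
    local cluster: the candidates to send for annotation."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(features)
    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            continue
        inside = dominant_set_mask(features[idx])
        selected.extend(idx[~inside])  # boundary points are harder to classify
    return np.asarray(selected, dtype=int)
```

In the spirit of the abstract's last point, conventional uncertainty scores could then be blended in, e.g., by ranking the returned boundary points by model uncertainty before querying labels.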
Related papers
- Classification Tree-based Active Learning: A Wrapper Approach [4.706932040794696]
This paper proposes a wrapper active learning method for classification, organizing the sampling process into a tree structure.
A classification tree constructed on an initial set of labeled samples is used to decompose the feature space into low-entropy regions.
This adaptation proves to be a significant enhancement over existing active learning methods.
arXiv Detail & Related papers (2024-04-15T17:27:00Z)
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [20.389636181891515]
Deep learning models often require large amounts of data for training, leading to increased costs.
We present two data valuation metrics based on Synaptic Intelligence and gradient norms, respectively, to study redundancy in real-world image data.
Online and offline data selection algorithms are then proposed via clustering and grouping based on the examined data values.
arXiv Detail & Related papers (2023-06-25T03:31:05Z)
- infoVerse: A Universal Framework for Dataset Characterization with Multidimensional Meta-information [68.76707843019886]
infoVerse is a universal framework for dataset characterization.
infoVerse captures multidimensional characteristics of datasets by incorporating various model-driven meta-information.
In three real-world applications (data pruning, active learning, and data annotation), the samples chosen on infoVerse space consistently outperform strong baselines.
arXiv Detail & Related papers (2023-05-30T18:12:48Z)
- M-Tuning: Prompt Tuning with Mitigated Label Bias in Open-Set Scenarios [103.6153593636399]
We propose a vision-language prompt tuning method with mitigated label bias (M-Tuning).
It introduces open words from WordNet to extend the prompt texts beyond closed-set label words, so that prompts are tuned in a simulated open-set scenario.
Our method achieves the best performance on datasets with various scales, and extensive ablation studies also validate its effectiveness.
arXiv Detail & Related papers (2023-03-09T09:05:47Z)
- Exploiting Diversity of Unlabeled Data for Label-Efficient Semi-Supervised Active Learning [57.436224561482966]
Active learning is a research area that addresses the issue of expensive labeling by selecting the most important samples for annotation.
We introduce a new diversity-based initial dataset selection algorithm to select the most informative set of samples for initial labeling in the active learning setting.
Also, we propose a novel active learning query strategy, which uses diversity-based sampling on consistency-based embeddings.
arXiv Detail & Related papers (2022-07-25T16:11:55Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification accuracy than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives such as autoencoders; a minimal sketch of the corruption step appears after this list.
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- Online Coreset Selection for Rehearsal-based Continual Learning [65.85595842458882]
In continual learning, we store a subset of training examples (coreset) to be replayed later to alleviate catastrophic forgetting.
We propose Online Coreset Selection (OCS), a simple yet effective method that selects the most representative and informative coreset at each iteration.
Our proposed method maximizes the model's adaptation to a target dataset while selecting high-affinity samples to past tasks, which directly inhibits catastrophic forgetting.
arXiv Detail & Related papers (2021-06-02T11:39:25Z)
- Contextual Diversity for Active Learning [9.546771465714876]
The need for large labeled datasets restricts the use of deep convolutional neural networks (CNNs) in many practical applications.
We introduce the notion of contextual diversity that captures the confusion associated with spatially co-occurring classes.
Our studies show clear advantages of using contextual diversity for active learning.
arXiv Detail & Related papers (2020-08-13T07:04:15Z)
- Reinforced active learning for image segmentation [34.096237671643145]
We present a new active learning strategy for semantic segmentation based on deep reinforcement learning (RL).
An agent learns a policy to select a subset of small informative image regions -- as opposed to entire images -- to be labeled, from a pool of unlabeled data.
Our method proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems.
arXiv Detail & Related papers (2020-02-16T14:03:06Z)
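As a concrete illustration of the SCARF entry above, here is a minimal sketch of its corruption step, assuming tabular features in a NumPy array; it is not the authors' code. The per-entry Bernoulli mask is a simplification of the paper's fixed-fraction feature subset, and `corruption_rate` is an illustrative parameter. Each corrupted entry is replaced by a value drawn from the same feature's empirical marginal, i.e., that feature's value in a randomly chosen row.

```python
import numpy as np

def scarf_corrupt(X, corruption_rate=0.6, rng=None):
    """Form a corrupted view of X (n_samples, n_features): for a random
    subset of entries per row, substitute a value sampled from that
    feature's empirical marginal (another row's value in the same column)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    mask = rng.random((n, d)) < corruption_rate   # which entries to corrupt
    donor_rows = rng.integers(0, n, size=(n, d))  # sample donors per feature
    return np.where(mask, X[donor_rows, np.arange(d)], X)
```

Two such views of the same batch would then be encoded and trained with a contrastive, InfoNCE-style loss so that views of the same row agree.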
This list is automatically generated from the titles and abstracts of the papers on this site.