Contextual Diversity for Active Learning
- URL: http://arxiv.org/abs/2008.05723v1
- Date: Thu, 13 Aug 2020 07:04:15 GMT
- Title: Contextual Diversity for Active Learning
- Authors: Sharat Agarwal and Himanshu Arora and Saket Anand and Chetan Arora
- Abstract summary: The requirement of large annotated datasets restricts the use of deep convolutional neural networks (CNNs) in many practical applications.
We introduce the notion of contextual diversity that captures the confusion associated with spatially co-occurring classes.
Our studies show clear advantages of using contextual diversity for active learning.
- Score: 9.546771465714876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The requirement of large annotated datasets restricts the use of
deep convolutional neural networks (CNNs) in many practical applications. The
problem can be mitigated by using active learning (AL) techniques which, under
a given annotation budget, allow the selection of a subset of data that yields
maximum accuracy upon fine-tuning. State-of-the-art AL approaches typically
rely on measures of visual diversity or prediction uncertainty, which are
unable to effectively capture variations in spatial context. On the other hand,
modern CNN architectures make heavy use of spatial context to achieve highly
accurate predictions. Since this context is difficult to evaluate in the
absence of ground-truth labels, we introduce the notion of contextual
diversity, which captures the confusion associated with spatially co-occurring
classes. Contextual Diversity (CD) hinges on the crucial observation that the
probability vector predicted by a CNN for a region of interest typically
contains information from a larger receptive field. Exploiting this
observation, we use the proposed CD measure within two AL frameworks for
active frame selection: (1) a core-set based strategy and (2) a reinforcement
learning based policy. Our extensive empirical evaluation establishes
state-of-the-art results for active learning on benchmark datasets for
semantic segmentation, object detection and image classification. Our ablation
studies show clear advantages of using contextual diversity for active
learning. The source code and additional results are available at
https://github.com/sharat29ag/CDAL.
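The abstract only sketches how contextual diversity is computed, so the following is a minimal, hypothetical illustration rather than the authors' exact formulation (the actual implementation is in the CDAL repository). It assumes each image is summarized by per-pseudo-class mean softmax vectors, uses a symmetric KL divergence as a stand-in for the confusion between two images' class-conditional predictions, and plugs the resulting pairwise distances into a standard k-center greedy core-set selection; the function names and the aggregation scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

def contextual_diversity_distance(class_probs_a, class_probs_b, eps=1e-12):
    """Illustrative pairwise contextual-diversity distance between two images.

    class_probs_a / class_probs_b: dicts mapping a pseudo-labelled class id to
    the mean softmax probability vector (shape [num_classes]) aggregated over
    all pixels predicted as that class. This per-class aggregation is a
    simplifying assumption for the sketch.
    """
    shared = set(class_probs_a) & set(class_probs_b)
    dist = 0.0
    for c in shared:
        p = np.clip(class_probs_a[c], eps, 1.0)
        q = np.clip(class_probs_b[c], eps, 1.0)
        p, q = p / p.sum(), q / q.sum()
        # Symmetric KL divergence as a stand-in confusion measure between the
        # class-conditional prediction mixtures of the two images.
        dist += 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
    return dist

def k_center_greedy(distance_matrix, labelled_idx, budget):
    """Standard k-center greedy core-set selection run on a precomputed
    pairwise distance matrix (here, contextual-diversity distances)."""
    selected = list(labelled_idx)
    # Distance of every point to its nearest already-selected point.
    min_dist = distance_matrix[:, selected].min(axis=1)
    for _ in range(budget):
        nxt = int(np.argmax(min_dist))          # farthest uncovered frame
        selected.append(nxt)
        min_dist = np.minimum(min_dist, distance_matrix[:, nxt])
    return selected[len(labelled_idx):]          # newly selected indices
```

Under these assumptions, one would compute the pairwise distance matrix over the unlabelled pool once, then call `k_center_greedy` with the currently labelled indices and the annotation budget; the reinforcement learning based policy mentioned in the abstract would instead use such distances as part of its selection reward.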
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - Contextrast: Contextual Contrastive Learning for Semantic Segmentation [9.051352746190448]
We propose Contextrast, a contrastive learning-based semantic segmentation method.
Our proposed method comprises two parts: a) contextual contrastive learning (CCL) and b) boundary-aware negative sampling.
We demonstrate that our Contextrast substantially enhances the performance of semantic segmentation networks.
arXiv Detail & Related papers (2024-04-16T15:04:55Z) - Supervised Gradual Machine Learning for Aspect Category Detection [0.9857683394266679]
Aspect Category Detection (ACD) aims to identify implicit and explicit aspects in a given review sentence.
We propose a novel approach to tackle the ACD task by combining Deep Neural Networks (DNNs) with Gradual Machine Learning (GML) in a supervised setting.
arXiv Detail & Related papers (2024-04-08T07:21:46Z) - Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
arXiv Detail & Related papers (2023-05-30T01:38:54Z) - Nearest Neighbor-Based Contrastive Learning for Hyperspectral and LiDAR
Data Classification [45.026868970899514]
We propose a Nearest Neighbor-based Contrastive Learning Network (NNCNet) to learn discriminative feature representations.
Specifically, we propose a nearest neighbor-based data augmentation scheme to exploit the enhanced semantic relationships among nearby regions.
In addition, we design a bilinear attention module to exploit the second-order and even high-order feature interactions between the HSI and LiDAR data.
arXiv Detail & Related papers (2023-01-09T13:43:54Z) - Contextual information integration for stance detection via
cross-attention [59.662413798388485]
Stance detection deals with identifying an author's stance towards a target.
Most existing stance detection models are limited because they do not consider relevant contextual information.
We propose an approach to integrate contextual information as text.
arXiv Detail & Related papers (2022-11-03T15:04:29Z) - Voxel-wise Adversarial Semi-supervised Learning for Medical Image
Segmentation [4.489713477369384]
We introduce a novel adversarial learning-based semi-supervised segmentation method for medical image segmentation.
Our method embeds both local and global features from multiple hidden layers and learns context relations between multiple classes.
Our method outperforms state-of-the-art semi-supervised learning approaches on segmentation of the left atrium (single-class) and multi-organ (multi-class) datasets.
arXiv Detail & Related papers (2022-05-14T06:57:19Z) - Dominant Set-based Active Learning for Text Classification and its
Application to Online Social Media [0.0]
We present a novel pool-based active learning method for training on a large unlabeled corpus with minimum annotation cost.
Our proposed method does not have any parameters to be tuned, making it dataset-independent.
Our method achieves a higher performance in comparison to the state-of-the-art active learning strategies.
arXiv Detail & Related papers (2022-01-28T19:19:03Z) - Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z) - Context Decoupling Augmentation for Weakly Supervised Semantic
Segmentation [53.49821324597837]
Weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years.
We present a Context Decoupling Augmentation (CDA) method to change the inherent context in which the objects appear.
To validate the effectiveness of the proposed method, extensive experiments on PASCAL VOC 2012 dataset with several alternative network architectures demonstrate that CDA can boost various popular WSSS methods to the new state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-03-02T15:05:09Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)