Deep Active Learning with Augmentation-based Consistency Estimation
- URL: http://arxiv.org/abs/2011.02666v1
- Date: Thu, 5 Nov 2020 05:22:58 GMT
- Title: Deep Active Learning with Augmentation-based Consistency Estimation
- Authors: SeulGi Hong, Heonjin Ha, Junmo Kim, Min-Kook Choi
- Abstract summary: We propose a methodology to improve generalization ability, by applying data augmentation-based techniques to an active learning scenario.
For the data augmentation-based regularization loss, we redefined cutout (co) and cutmix (cm) strategies as quantitative metrics.
We have shown that the augmentation-based regularizer can lead to improved performance on the training step of active learning.
- Score: 23.492616938184092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In active learning, the focus is mainly on the selection strategy of
unlabeled data for enhancing the generalization capability of the next learning
cycle. For this, various uncertainty measurement methods have been proposed. On
the other hand, with the advent of data augmentation metrics as the regularizer
on general deep learning, we notice that there can be a mutual influence
between the method of unlabeled data selection and the data augmentation-based
regularization techniques in active learning scenarios. Through various
experiments, we confirmed that consistency-based regularization from analytical
learning theory could affect the generalization capability of the classifier in
combination with the existing uncertainty measurement method. Based on this
finding, we propose a methodology to improve generalization ability by applying
data augmentation-based techniques to an active learning scenario. For the data
augmentation-based regularization loss, we redefined the cutout (co) and cutmix
(cm) strategies as quantitative metrics and applied them at both the model training and
unlabeled data selection steps. We have shown that the augmentation-based
regularizer can lead to improved performance on the training step of active
learning, while that same approach can be effectively combined with the
uncertainty measurement metrics proposed so far. We used datasets such as
FashionMNIST, CIFAR10, CIFAR100, and STL10 to verify the performance of the
proposed active learning technique for multiple image classification tasks. Our
experiments show consistent performance gains for each dataset and budget
scenario.
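The core idea — scoring how much a model's prediction changes under cutout or cutmix, and using that score both as a regularization signal and as a query criterion — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: `toy_predict` is a stand-in for a trained classifier, and `consistency_score` is an assumed name for the augmentation-based metric.

```python
import numpy as np

def cutout(image, size, rng):
    """Zero out a random square patch (cutout augmentation)."""
    h, w = image.shape[:2]
    y, x = rng.integers(0, h), rng.integers(0, w)
    out = image.copy()
    out[max(0, y - size // 2):min(h, y + size // 2),
        max(0, x - size // 2):min(w, x + size // 2)] = 0.0
    return out

def cutmix(image_a, image_b, size, rng):
    """Paste a random patch of image_b into image_a (cutmix augmentation)."""
    h, w = image_a.shape[:2]
    y, x = rng.integers(0, h), rng.integers(0, w)
    y0, y1 = max(0, y - size // 2), min(h, y + size // 2)
    x0, x1 = max(0, x - size // 2), min(w, x + size // 2)
    out = image_a.copy()
    out[y0:y1, x0:x1] = image_b[y0:y1, x0:x1]
    return out

def consistency_score(predict, image, augmented):
    """Mean squared change in prediction under augmentation.

    Higher = less consistent; such unlabeled samples would be
    prioritized for labeling in the active learning selection step.
    """
    p, q = predict(image), predict(augmented)
    return float(np.mean((p - q) ** 2))

# Toy "model": mean intensity per image quadrant, standing in for logits.
def toy_predict(image):
    h, w = image.shape[:2]
    return np.array([image[:h // 2, :w // 2].mean(), image[:h // 2, w // 2:].mean(),
                     image[h // 2:, :w // 2].mean(), image[h // 2:, w // 2:].mean()])

rng = np.random.default_rng(0)
img_a = rng.random((32, 32))
img_b = rng.random((32, 32))
score_co = consistency_score(toy_predict, img_a, cutout(img_a, 8, rng))
score_cm = consistency_score(toy_predict, img_a, cutmix(img_a, img_b, 8, rng))
```

During training the same score can serve as an auxiliary consistency loss term; during selection, the unlabeled samples with the highest scores are queried.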
Related papers
- Bridging Diversity and Uncertainty in Active Learning with
Self-Supervised Pre-Training [23.573986817769025]
This study addresses the integration of diversity-based and uncertainty-based sampling strategies in active learning.
We introduce a straightforward method called TCM that mitigates the cold-start problem while maintaining strong performance across various data levels.
arXiv Detail & Related papers (2024-03-06T14:18:24Z) - BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP)
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z) - Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
arXiv Detail & Related papers (2022-09-29T18:11:01Z) - Dataset Condensation with Contrastive Signals [41.195453119305746]
Gradient matching-based dataset condensation (DC) methods can achieve state-of-the-art performance when applied to data-efficient learning tasks.
In this study, we prove that the existing DC methods can perform worse than the random selection method when task-irrelevant information forms a significant part of the training dataset.
We propose dataset condensation with Contrastive signals (DCC) by modifying the loss function to enable the DC methods to effectively capture the differences between classes.
arXiv Detail & Related papers (2022-02-07T03:05:32Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - Generalization of Reinforcement Learning with Policy-Aware Adversarial
Data Augmentation [32.70482982044965]
We propose a novel policy-aware adversarial data augmentation method to augment the standard policy learning method with automatically generated trajectory data.
We conduct experiments on a number of RL tasks to investigate the generalization performance of the proposed method.
The results show our method can generalize well with limited training diversity, and achieve the state-of-the-art generalization test performance.
arXiv Detail & Related papers (2021-06-29T17:21:59Z) - On Data Efficiency of Meta-learning [17.739215706060605]
We study the often overlooked aspect of the modern meta-learning algorithms -- their data efficiency.
We introduce a new simple framework for evaluating meta-learning methods under a limit on the available supervision.
We propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited supervision regime.
arXiv Detail & Related papers (2021-01-30T01:44:12Z) - Ask-n-Learn: Active Learning via Reliable Gradient Representations for
Image Classification [29.43017692274488]
Deep predictive models rely on human supervision in the form of labeled training data.
We propose Ask-n-Learn, an active learning approach based on gradient embeddings obtained using the pseudo-labels estimated at each iteration of the algorithm.
arXiv Detail & Related papers (2020-09-30T05:19:56Z) - Automatic Data Augmentation via Deep Reinforcement Learning for
Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation which validated the promising results of our method.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.