Unsupervised Embedding Learning from Uncertainty Momentum Modeling
- URL: http://arxiv.org/abs/2107.08892v1
- Date: Mon, 19 Jul 2021 14:06:19 GMT
- Title: Unsupervised Embedding Learning from Uncertainty Momentum Modeling
- Authors: Jiahuan Zhou, Yansong Tang, Bing Su, Ying Wu
- Abstract summary: We propose a novel solution to explicitly model and explore the uncertainty of the given unlabeled learning samples.
We leverage this uncertainty modeling as momentum during learning, which helps tackle outliers.
- Score: 37.674449317054716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing popular unsupervised embedding learning methods focus on enhancing
the instance-level local discrimination of the given unlabeled images by
exploring various negative data. However, existing sample outliers, which exhibit
large intra-class divergences or small inter-class variations, severely limit
their learning performance. We show that this performance limitation is caused by
vanishing gradients on these sample outliers. Moreover, the shortage of positive
data and the lack of global discrimination also pose critical issues for
unsupervised learning, yet they are largely ignored by
existing methods. To handle these issues, we propose a novel solution to
explicitly model and directly explore the uncertainty of the given unlabeled
learning samples. Instead of learning a deterministic feature point for each
sample in the embedding space, we propose to represent a sample by a stochastic
Gaussian with the mean vector depicting its space localization and covariance
vector representing the sample uncertainty. We leverage such uncertainty modeling
as momentum during learning, which helps tackle the outliers.
Furthermore, abundant positive candidates can be readily drawn from the learned
instance-specific distributions and are further used to mitigate the
aforementioned issues. Thorough rationale analyses and extensive experiments
are presented to verify the superiority of our approach.
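To make the core idea of the abstract concrete, the following is a minimal, hypothetical sketch in Python/PyTorch: an encoder outputs a mean vector (space localization) and a diagonal log-variance (sample uncertainty), and extra positive candidates are drawn from each instance-specific Gaussian via the reparameterization trick. The module names, dimensions, and sampling routine are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


class GaussianEmbedding(torch.nn.Module):
    """Maps an input to a stochastic embedding: a mean vector (location in the
    embedding space) plus a diagonal log-variance (per-sample uncertainty)."""

    def __init__(self, backbone, feat_dim=512, embed_dim=128):
        super().__init__()
        self.backbone = backbone                        # any feature extractor (e.g. a CNN)
        self.mu_head = torch.nn.Linear(feat_dim, embed_dim)
        self.logvar_head = torch.nn.Linear(feat_dim, embed_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu = F.normalize(self.mu_head(h), dim=-1)       # mean vector on the unit sphere
        logvar = self.logvar_head(h)                    # log of the diagonal covariance
        return mu, logvar


def sample_positives(mu, logvar, num_samples=4):
    """Draw extra positive candidates from each instance-specific Gaussian using
    the reparameterization trick, so samples stay differentiable w.r.t. mu/logvar."""
    std = torch.exp(0.5 * logvar)                                 # (B, D)
    eps = torch.randn(num_samples, *mu.shape, device=mu.device)   # (K, B, D)
    return F.normalize(mu.unsqueeze(0) + eps * std.unsqueeze(0), dim=-1)
```

Candidates drawn this way could serve as additional positives in an instance-wise discrimination loss, while the learned log-variance could modulate per-sample updates, loosely playing the "momentum" role described above; these specifics are assumptions rather than a reproduction of the paper's method.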
Related papers
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Adversarial Resilience in Sequential Prediction via Abstention [46.80218090768711]
We study the problem of sequential prediction in the setting with an adversary that is allowed to inject clean-label adversarial examples.
We propose a new model of sequential prediction that sits between the purely stochastic and fully adversarial settings.
arXiv Detail & Related papers (2023-06-22T17:44:22Z)
- Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning [69.81438976273866]
Open-set semi-supervised learning (Open-set SSL) considers a more practical scenario, where unlabeled data and test data contain new categories (outliers) not observed in labeled data (inliers).
We introduce evidential deep learning (EDL) as an outlier detector to quantify different types of uncertainty, and design different uncertainty metrics for self-training and inference.
We propose a novel adaptive negative optimization strategy, making EDL more tailored to the unlabeled dataset containing both inliers and outliers.
arXiv Detail & Related papers (2023-03-21T09:07:15Z)
- Poisson Reweighted Laplacian Uncertainty Sampling for Graph-based Active Learning [1.6752182911522522]
We show that uncertainty sampling is sufficient to achieve exploration versus exploitation in graph-based active learning.
In particular, we use a recently developed algorithm, Poisson ReWeighted Laplace Learning (PWLL) for the classifier.
We present experimental results on a number of graph-based image classification problems.
arXiv Detail & Related papers (2022-10-27T22:07:53Z)
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representation.
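A rough, hypothetical illustration of the cluster-then-reweight idea summarized above (not the authors' implementation; the clustering method and weighting rule are assumptions): cluster the feature embeddings, treat cluster assignments as pseudo-attributes, and upweight samples from small clusters.

```python
import numpy as np
from sklearn.cluster import KMeans


def cluster_reweight(embeddings, num_clusters=8):
    """Treat cluster assignments over feature embeddings as pseudo-attributes and
    return per-sample weights that upweight under-represented (small) clusters."""
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(embeddings)
    counts = np.maximum(np.bincount(labels), 1)        # samples per cluster (avoid /0)
    weights = 1.0 / counts[labels]                     # inverse cluster size per sample
    return labels, weights / weights.mean()            # normalize to mean weight 1
```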
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation [63.042651834453544]
We show that the unsupervised learning of disentangled representations is impossible without inductive biases on both the models and the data.
We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision.
arXiv Detail & Related papers (2020-10-27T10:17:15Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Mitigating Class Boundary Label Uncertainty to Reduce Both Model Bias and Variance [4.563176550691304]
We investigate a new approach to handle inaccuracy and uncertainty in the training data labels.
Our method can reduce both bias and variance by estimating the pointwise label uncertainty of the training set.
arXiv Detail & Related papers (2020-02-23T18:24:04Z)