Hub-VAE: Unsupervised Hub-based Regularization of Variational
Autoencoders
- URL: http://arxiv.org/abs/2211.10469v1
- Date: Fri, 18 Nov 2022 19:12:15 GMT
- Title: Hub-VAE: Unsupervised Hub-based Regularization of Variational
Autoencoders
- Authors: Priya Mani and Carlotta Domeniconi
- Abstract summary: We propose an unsupervised, data-driven regularization of the latent space with a mixture of hub-based priors and a hub-based contrastive loss.
Our algorithm achieves superior cluster separability in the embedding space, and accurate data reconstruction and generation.
- Score: 11.252245456934348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exemplar-based methods rely on informative data points or prototypes to guide
the optimization of learning algorithms. Such data facilitate interpretable
model design and prediction. Of particular interest is the utility of exemplars
in learning unsupervised deep representations. In this paper, we leverage hubs,
which emerge as frequent neighbors in high-dimensional spaces, as exemplars to
regularize a variational autoencoder and to learn a discriminative embedding
for unsupervised downstream tasks. We propose an unsupervised, data-driven
regularization of the latent space with a mixture of hub-based priors and a
hub-based contrastive loss. Experimental evaluation shows that our algorithm
achieves superior cluster separability in the embedding space, and accurate
data reconstruction and generation, compared to baselines and state-of-the-art
techniques.
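The abstract's central notion is the "hub": a point that appears unusually often among the k-nearest neighbors of other points in a high-dimensional space. As a rough illustration (not the authors' implementation), the standard k-occurrence count used in the hubness literature can be sketched as follows; the function name `hub_scores` and the brute-force distance computation are illustrative choices only.

```python
import numpy as np

def hub_scores(X, k=10):
    """k-occurrence counts: how often each point appears among the
    k nearest neighbors of the other points. Points with counts far
    above k are 'hubs'."""
    n = X.shape[0]
    # Brute-force pairwise squared Euclidean distances (fine for small n).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbor
    # Indices of each point's k nearest neighbors.
    knn = np.argsort(d2, axis=1)[:, :k]
    # Count how many neighbor lists each point appears in.
    return np.bincount(knn.ravel(), minlength=n)
```

Points whose count greatly exceeds k would then serve as candidate exemplars; how the paper turns them into a mixture of priors and a contrastive loss is described in the full text.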
Related papers
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Learning A Disentangling Representation For PU Learning [18.94726971543125]
We propose to learn a neural network-based data representation using a loss function that can be used to project the unlabeled data into two clusters.
We conduct experiments on simulated PU data that demonstrate the improved performance of our proposed method compared to the current state-of-the-art approaches.
arXiv Detail & Related papers (2023-10-05T18:33:32Z)
- Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning [11.245813423781415]
We devise novel data-driven supervision for data by introducing a characteristic -- scale -- as data labels.
Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training.
This paper further proposes a scale learning-based anomaly detection method.
arXiv Detail & Related papers (2023-05-25T14:48:00Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- Clustering augmented Self-Supervised Learning: An Application to Land Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z)
- Graph Constrained Data Representation Learning for Human Motion Segmentation [14.611777974037194]
We propose a novel unsupervised model that learns a representation of the data and digs clustering information from the data itself.
Experimental results on four benchmark datasets for HMS demonstrate that our approach achieves significantly better clustering performance than state-of-the-art methods.
arXiv Detail & Related papers (2021-07-28T13:49:16Z)
- Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain Adaptation using Structurally Regularized Deep Clustering [119.88565565454378]
Unsupervised domain adaptation (UDA) aims to learn classification models that make predictions for unlabeled data on a target domain.
We propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one.
Our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings.
arXiv Detail & Related papers (2020-12-08T08:52:00Z)
- Cluster-level Feature Alignment for Person Re-identification [16.01713931617725]
This paper probes another feature alignment modality, namely cluster-level feature alignment across whole dataset.
We propose an anchor loss and investigate many variants of cluster-level feature alignment, consisting of iterative aggregation and alignment over the whole dataset.
arXiv Detail & Related papers (2020-08-15T23:47:47Z)
- BREEDS: Benchmarks for Subpopulation Shift [98.90314444545204]
We develop a methodology for assessing the robustness of models to subpopulation shift.
We leverage the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions.
Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity.
arXiv Detail & Related papers (2020-08-11T17:04:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.