Group-disentangled Representation Learning with Weakly-Supervised
Regularization
- URL: http://arxiv.org/abs/2110.12185v1
- Date: Sat, 23 Oct 2021 10:01:05 GMT
- Title: Group-disentangled Representation Learning with Weakly-Supervised
Regularization
- Authors: Linh Tran, Amir Hosein Khasahmadi, Aditya Sanghi, Saeid Asgari
- Abstract summary: GroupVAE is a simple yet effective Kullback-Leibler divergence-based regularization to enforce consistent and disentangled representations.
We demonstrate that learning group-disentangled representations improves performance on downstream tasks, including fair classification and 3D shape-related tasks such as reconstruction, classification, and transfer learning.
- Score: 13.311886256230814
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning interpretable and human-controllable representations that uncover
factors of variation in data remains an ongoing key challenge in representation
learning. We investigate learning group-disentangled representations for groups
of factors with weak supervision. Existing techniques to address this challenge
merely constrain the approximate posterior by averaging over observations of a
shared group. As a result, observations with a common set of variations are
encoded to distinct latent representations, reducing their capacity to
disentangle and generalize to downstream tasks. In contrast to previous works,
we propose GroupVAE, a simple yet effective Kullback-Leibler (KL)
divergence-based regularization across shared latent representations to enforce
consistent and disentangled representations. We conduct a thorough evaluation
and demonstrate that our GroupVAE significantly improves group disentanglement.
Further, we demonstrate that learning group-disentangled representations
improves performance on downstream tasks, including fair classification and 3D
shape-related tasks such as reconstruction, classification, and transfer
learning, and is competitive with supervised methods.
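The abstract describes a KL-divergence regularizer applied across the latent representations that observations in a group share. As a minimal, illustrative sketch only: the function names, the pairwise symmetric KL, and the explicit `shared_dims` index set below are assumptions for illustration, not the paper's actual implementation.

```python
import math

def kl_diag_gaussian(mu1, var1, mu2, var2):
    """KL(N(mu1, diag(var1)) || N(mu2, diag(var2))) for diagonal Gaussians,
    summed over dimensions."""
    return 0.5 * sum(
        math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0
        for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2)
    )

def group_consistency_loss(posteriors, shared_dims):
    """Average symmetric KL between the shared latent dimensions of every
    pair of posteriors in a group. Each posterior is a (mu, var) pair of
    equal-length lists. A loss of 0 means the shared factors are encoded
    identically across the group's observations."""
    loss, pairs = 0.0, 0
    for i in range(len(posteriors)):
        for j in range(i + 1, len(posteriors)):
            mu_i, var_i = posteriors[i]
            mu_j, var_j = posteriors[j]
            # Restrict the divergence to the dimensions the group shares.
            mi = [mu_i[d] for d in shared_dims]
            vi = [var_i[d] for d in shared_dims]
            mj = [mu_j[d] for d in shared_dims]
            vj = [var_j[d] for d in shared_dims]
            loss += 0.5 * (kl_diag_gaussian(mi, vi, mj, vj)
                           + kl_diag_gaussian(mj, vj, mi, vi))
            pairs += 1
    return loss / max(pairs, 1)
```

Such a term would be added to the usual VAE objective, penalizing groups whose members encode the shared factors differently while leaving the non-shared dimensions unconstrained.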
Related papers
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- InfoNCE Loss Provably Learns Cluster-Preserving Representations [54.28112623495274]
Our main result shows that the representation learned by InfoNCE with a finite number of negative samples is consistent with respect to clusters in the data.
arXiv Detail & Related papers (2023-02-15T19:45:35Z)
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- Joint Debiased Representation and Image Clustering Learning with Self-Supervision [3.1806743741013657]
We develop a novel joint clustering and contrastive learning framework.
We adapt the debiased contrastive loss to avoid under-clustering minority classes of imbalanced datasets.
arXiv Detail & Related papers (2022-09-14T21:23:41Z)
- Unsupervised Visual Representation Learning by Synchronous Momentum Grouping [47.48803765951601]
The proposed group-level contrastive visual representation learning method surpasses vanilla supervised learning on ImageNet.
We conduct exhaustive experiments to show that SMoG has surpassed the current SOTA unsupervised representation learning methods.
arXiv Detail & Related papers (2022-07-13T13:04:15Z)
- Cycle-Balanced Representation Learning For Counterfactual Inference [42.229586802733806]
We propose a novel framework based on Cycle-Balanced REpresentation learning for counterfactual inference (CBRE).
Specifically, we learn a robust balanced representation for different groups using adversarial training, and meanwhile construct an information loop so that original data properties are preserved cyclically.
Results on three real-world datasets demonstrate that CBRE matches or outperforms state-of-the-art methods and has great potential to be applied to counterfactual inference.
arXiv Detail & Related papers (2021-10-29T01:15:16Z)
- You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data subjected to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps.
arXiv Detail & Related papers (2021-06-03T14:59:59Z)
- Representation Learning for Clustering via Building Consensus [3.7434090710577608]
We propose Consensus Clustering using Unsupervised Representation Learning (ConCURL).
ConCURL improves clustering performance over state-of-the-art methods on four out of five image datasets.
We extend the evaluation procedure for clustering to reflect the challenges in real world clustering tasks.
arXiv Detail & Related papers (2021-05-04T05:04:03Z)
- Weakly-Supervised Disentanglement Without Compromises [53.55580957483103]
Intelligent agents should be able to learn useful representations by observing changes in their environment.
We model such observations as pairs of non-i.i.d. images sharing at least one of the underlying factors of variation.
We show that only knowing how many factors have changed, but not which ones, is sufficient to learn disentangled representations.
arXiv Detail & Related papers (2020-02-07T16:39:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.