Synergies between Disentanglement and Sparsity: Generalization and
Identifiability in Multi-Task Learning
- URL: http://arxiv.org/abs/2211.14666v2
- Date: Tue, 6 Jun 2023 18:02:14 GMT
- Title: Synergies between Disentanglement and Sparsity: Generalization and
Identifiability in Multi-Task Learning
- Authors: Sébastien Lachapelle, Tristan Deleu, Divyat Mahajan, Ioannis
Mitliagkas, Yoshua Bengio, Simon Lacoste-Julien, Quentin Bertrand
- Abstract summary: We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
- Score: 79.83792914684985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although disentangled representations are often said to be beneficial for
downstream tasks, current empirical and theoretical understanding is limited.
In this work, we provide evidence that disentangled representations coupled
with sparse base-predictors improve generalization. In the context of
multi-task learning, we prove a new identifiability result that provides
conditions under which maximally sparse base-predictors yield disentangled
representations. Motivated by this theoretical result, we propose a practical
approach to learn disentangled representations based on a sparsity-promoting
bi-level optimization problem. Finally, we explore a meta-learning version of
this algorithm based on group Lasso multiclass SVM base-predictors, for which
we derive a tractable dual formulation. It obtains competitive results on
standard few-shot classification benchmarks, while each task uses only a
fraction of the learned representations.
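As a rough illustration of the kind of sparsity-promoting multi-task objective described above (a joint, single-level simplification of the paper's bi-level formulation; all names, shapes, and hyperparameters are assumptions), one could penalize each task-specific linear head with a group Lasso over latent dimensions, so that every task relies on only a few coordinates of the shared representation:

```python
# Hypothetical sketch: shared encoder + per-task linear heads, with a
# group-Lasso penalty over each head's columns (one group per latent
# dimension). Not the authors' implementation; shapes/names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, n_classes, n_tasks, lam = 32, 5, 10, 1e-2

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
heads = nn.ModuleList([nn.Linear(latent_dim, n_classes, bias=False) for _ in range(n_tasks)])
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)

def step(task_batches):
    """task_batches: list of (x, y) pairs, one batch per task."""
    total = 0.0
    for head, (x, y) in zip(heads, task_batches):
        z = encoder(x)                                  # shared representation
        total = total + F.cross_entropy(head(z), y)     # per-task prediction loss
        # Group Lasso: L2 norm per latent dimension (column), summed.
        # Drives whole latent dimensions to zero for this task.
        total = total + lam * head.weight.norm(dim=0).sum()
    opt.zero_grad()
    total.backward()
    opt.step()
    return float(total)
```

Latent dimensions whose column norms are driven to zero for a given task are effectively unused by that task; the identifiability result ties this kind of maximal per-task sparsity to disentanglement of the shared representation.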
Related papers
- Hierarchical Decomposition of Prompt-Based Continual Learning:
Rethinking Obscured Sub-optimality [55.88910947643436]
Self-supervised pre-training is essential for handling vast quantities of unlabeled data in practice.
HiDe-Prompt is an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics.
Our experiments demonstrate the superior performance of HiDe-Prompt and its robustness to pre-training paradigms in continual learning.
arXiv Detail & Related papers (2023-10-11T06:51:46Z) - Understanding and Constructing Latent Modality Structures in Multi-modal
Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
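A speculative sketch of how such regularizers could be combined is shown below; only the geometric-consistency term is made concrete (as agreement of pairwise-similarity structure across the two modalities), and every name and functional form here is an assumption rather than the paper's definition.

```python
# Speculative sketch: weighted combination of a task loss with three
# regularizers. sep_loss and bridge_loss stand in for the intra-modality
# separation and Brownian-bridge terms, whose exact forms are not reproduced.
import torch
import torch.nn.functional as F

def geometric_consistency(z_img, z_txt):
    """Encourage both modalities to induce similar pairwise geometry."""
    sim_img = F.normalize(z_img, dim=-1) @ F.normalize(z_img, dim=-1).T
    sim_txt = F.normalize(z_txt, dim=-1) @ F.normalize(z_txt, dim=-1).T
    return F.mse_loss(sim_img, sim_txt)

def total_loss(task_loss, z_img, z_txt, sep_loss, bridge_loss, w=(1.0, 1.0, 1.0)):
    return (task_loss
            + w[0] * sep_loss
            + w[1] * bridge_loss
            + w[2] * geometric_consistency(z_img, z_txt))
```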
arXiv Detail & Related papers (2023-03-10T14:38:49Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
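One way to read this (a sketch under assumptions, not the paper's implementation) is as an energy-based transition model P(s'|s,a) ∝ exp(φ(s,a)·μ(s')) trained contrastively: the observed next state is the positive, other next states in the batch serve as negatives, and the in-batch softmax supplies the normalization.

```python
# InfoNCE-style contrastive loss for an energy-based transition model.
# phi_sa and mu_next are assumed feature outputs of learned networks.
import torch
import torch.nn.functional as F

def contrastive_transition_loss(phi_sa, mu_next):
    """phi_sa: (B, d) features of (s, a); mu_next: (B, d) features of s'."""
    logits = phi_sa @ mu_next.T              # (B, B) pairwise energies
    targets = torch.arange(len(phi_sa))      # row i's positive is column i
    return F.cross_entropy(logits, targets)  # softmax over the batch normalizes
```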
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - VC Theoretical Explanation of Double Descent [1.52292571922932]
This paper presents VC-theoretical analysis of double descent and shows that it can be fully explained by classical VC generalization bounds.
We illustrate an application of analytic VC-bounds for modeling double descent for classification problems, using empirical results for several learning methods.
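For reference, one standard form of the classical VC generalization bound that such an analysis builds on (binary classification, hypothesis class of VC dimension h, n training samples, confidence at least 1 - η):

```latex
% Classical VC bound (Vapnik): with probability at least 1 - \eta,
R(f) \;\le\; R_{\mathrm{emp}}(f)
  + \sqrt{\frac{h\left(\ln\tfrac{2n}{h} + 1\right) - \ln\tfrac{\eta}{4}}{n}}
```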
arXiv Detail & Related papers (2022-05-31T05:50:02Z) - Explaining, Evaluating and Enhancing Neural Networks' Learned
Representations [2.1485350418225244]
We show how explainability can be an aid, rather than an obstacle, towards better and more efficient representations.
We employ such attributions to define two novel scores for evaluating the informativeness and the disentanglement of latent embeddings.
We show that adopting our proposed scores as constraints during the training of a representation learning task improves the downstream performance of the model.
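A hypothetical illustration of attribution-based scores over latent dimensions follows (the paper's actual score definitions may differ): informativeness is taken as the overall attribution mass, and disentanglement as how concentrated that mass is on a few units.

```python
# Illustrative only: gradient-x-input attributions per latent dimension,
# aggregated into informativeness and concentration-based disentanglement.
import torch

def latent_scores(model_head, z, target):
    z = z.detach().requires_grad_(True)
    logit = model_head(z)[torch.arange(len(z)), target].sum()
    grad, = torch.autograd.grad(logit, z)
    attr = (grad * z).abs().mean(dim=0)          # per-dimension attribution
    informativeness = attr.sum()
    p = attr / (attr.sum() + 1e-12)
    entropy = -(p * (p + 1e-12).log()).sum()
    disentanglement = 1.0 - entropy / torch.log(torch.tensor(float(len(p))))
    return informativeness, disentanglement
```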
arXiv Detail & Related papers (2022-02-18T19:00:01Z) - MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
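A sketch of a discrepancy-based acquisition rule in this spirit (hypothetical names; the paper's exact discrepancy measure may differ): score each unlabeled example by how much the two auxiliary heads disagree, then send the highest-scoring ones for labeling.

```python
# Discrepancy-based acquisition: label the unlabeled points on which two
# auxiliary classifier heads disagree the most. Names/shapes are assumed.
import torch
import torch.nn.functional as F

def acquire(encoder, head_a, head_b, unlabeled_x, budget):
    with torch.no_grad():
        z = encoder(unlabeled_x)
        p_a = F.softmax(head_a(z), dim=-1)
        p_b = F.softmax(head_b(z), dim=-1)
        discrepancy = (p_a - p_b).abs().sum(dim=-1)   # L1 gap between the heads
    return discrepancy.topk(budget).indices           # indices to annotate
```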
arXiv Detail & Related papers (2021-07-23T06:57:08Z) - Efficient Iterative Amortized Inference for Learning Symmetric and
Disentangled Multi-Object Representations [8.163697683448811]
We introduce EfficientMORL, an efficient framework for the unsupervised learning of object-centric representations.
We show that optimization challenges caused by requiring both symmetry and disentanglement can be addressed by high-cost iterative amortized inference.
We demonstrate strong object decomposition and disentanglement on the standard multi-object benchmark while achieving nearly an order of magnitude faster training and test time inference.
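For context, here is a generic and deliberately simplified sketch of iterative amortized inference over K object slots, in which a refinement network repeatedly updates each slot's posterior parameters from the current ELBO gradient; EfficientMORL's actual architecture differs, and all names here are assumptions.

```python
# Generic iterative amortized inference loop (illustrative only).
# decoder: maps K slot latents to a reconstruction of x (assumed).
# refine_net: maps current estimates + gradients to updates (assumed).
import torch

def iterative_amortized_inference(x, decoder, refine_net, K=4, steps=3, d=16):
    mu = torch.zeros(K, d, requires_grad=True)
    logvar = torch.zeros(K, d, requires_grad=True)
    for _ in range(steps):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized sample
        recon = decoder(z)
        kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum()
        neg_elbo = ((recon - x) ** 2).sum() + kl
        g_mu, g_logvar = torch.autograd.grad(neg_elbo, (mu, logvar))
        d_mu, d_logvar = refine_net(mu, logvar, g_mu, g_logvar)
        mu = (mu + d_mu).detach().requires_grad_(True)
        logvar = (logvar + d_logvar).detach().requires_grad_(True)
    return mu, logvar
```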
arXiv Detail & Related papers (2021-06-07T14:02:49Z) - How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
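A minimal sketch of the two regimes being compared, under assumed names: the "frozen representation" baseline fits only a new linear head on the support set, while fine-tuning also adapts the encoder with a few gradient steps.

```python
# Hypothetical few-shot adaptation routine; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def adapt(encoder, support_x, support_y, n_classes, steps=20, lr=1e-2, finetune_encoder=True):
    feat_dim = encoder(support_x[:1]).shape[-1]
    head = nn.Linear(feat_dim, n_classes)
    params = list(head.parameters())
    if finetune_encoder:                 # fine-tuning regime (MAML-like)
        params += list(encoder.parameters())
    opt = torch.optim.SGD(params, lr=lr)  # frozen regime updates only the head
    for _ in range(steps):
        loss = F.cross_entropy(head(encoder(support_x)), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head
```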
arXiv Detail & Related papers (2021-05-05T17:56:00Z) - Improving Few-Shot Learning through Multi-task Representation Learning
Theory [14.8429503385929]
We consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task.
We show that recent advances in MTR theory can provide novel insights for popular meta-learning algorithms when analyzed within this framework.
This is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.
arXiv Detail & Related papers (2020-10-05T13:24:43Z) - Functional Regularization for Representation Learning: A Unified
Theoretical Perspective [27.93916012334704]
Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks.
We present a unifying perspective where several such approaches can be viewed as imposing a regularization on the representation via a learnable function using unlabeled data.
We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of (Balcan and Blum, 2010) to allow learnable regularization functions.
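A minimal sketch of that unifying view, with assumed names: a representation h is trained on labeled data while an auxiliary learnable function g (for example a decoder) imposes a regularizer computed from unlabeled data; with g a decoder and squared error, this recovers auto-encoder-style regularization as one instance.

```python
# Supervised risk plus a learnable functional regularizer on unlabeled data.
# All names (h, g, pred_head) are placeholders, not the paper's notation.
import torch
import torch.nn.functional as F

def functional_regularization_loss(h, g, pred_head, x_lab, y_lab, x_unlab, lam=0.1):
    supervised = F.cross_entropy(pred_head(h(x_lab)), y_lab)   # downstream risk
    regularizer = F.mse_loss(g(h(x_unlab)), x_unlab)           # learnable penalty
    return supervised + lam * regularizer
```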
arXiv Detail & Related papers (2020-08-06T04:06:04Z)