Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning
- URL: http://arxiv.org/abs/2310.09278v1
- Date: Fri, 13 Oct 2023 17:40:39 GMT
- Title: Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning
- Authors: Geri Skenderi, Luigi Capogrosso, Andrea Toaiari, Matteo Denitto,
Franco Fummi, Simone Melzi, Marco Cristani
- Abstract summary: In deep learning, auxiliary objectives are often used to facilitate learning in situations where data is scarce.
We propose a novel framework, dubbed Detaux, whereby a weakly supervised disentanglement procedure is used to discover new unrelated classification tasks.
- Score: 15.41342100228504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In deep learning, auxiliary objectives are often used to facilitate learning
in situations where data is scarce, or the principal task is extremely complex.
This idea is primarily inspired by the improved generalization capability
induced by solving multiple tasks simultaneously, which leads to a more robust
shared representation. Nevertheless, finding optimal auxiliary tasks that give
rise to the desired improvement is a crucial problem that often requires
hand-crafted solutions or expensive meta-learning approaches. In this paper, we
propose a novel framework, dubbed Detaux, whereby a weakly supervised
disentanglement procedure is used to discover new unrelated classification
tasks and the associated labels that can be exploited with the principal task
in any Multi-Task Learning (MTL) model. The disentanglement procedure works at
a representation level, isolating a subspace related to the principal task,
plus an arbitrary number of orthogonal subspaces. In the most disentangled
subspaces, a clustering procedure generates the additional classification
tasks, with the cluster assignments serving as their labels.
Subsequently, the original data, the labels associated with the principal task,
and the newly discovered ones can be fed into any MTL framework. Extensive
validation on both synthetic and real data, along with various ablation
studies, demonstrate promising results, revealing the potential in what has
been, so far, an unexplored connection between learning disentangled
representations and MTL. The code will be made publicly available upon
acceptance.
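As a concrete illustration of the pipeline described above, here is a minimal Python sketch that fakes the output of a disentangled encoder, selects the most structured free subspace, and clusters it to obtain auxiliary task labels. The encoder output, the subspace-selection score, and all names (`disentanglement_score`, `n_subspaces`, the cluster count) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a Detaux-style auxiliary task discovery step.
# A pre-trained weakly supervised disentangled encoder is assumed to
# exist; here its output is faked with random data for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend latent codes from a disentangled encoder: one subspace is tied
# to the principal task, the remaining orthogonal subspaces are free.
n_samples, subspace_dim, n_subspaces = 1000, 8, 4
latents = rng.normal(size=(n_samples, n_subspaces, subspace_dim))

principal_idx = 0  # subspace isolated for the principal task (assumed)
free_subspaces = [i for i in range(n_subspaces) if i != principal_idx]

def disentanglement_score(z):
    # Stand-in criterion: more variance spread = more structure to cluster.
    # The paper's actual subspace-selection rule is not reproduced here.
    return z.var(axis=0).sum()

# Pick the most "disentangled" free subspace and cluster it: the cluster
# assignments become the labels of a new auxiliary classification task.
best = max(free_subspaces, key=lambda i: disentanglement_score(latents[:, i]))
aux_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    latents[:, best]
)

# aux_labels can now be fed, together with the original data and the
# principal-task labels, into any off-the-shelf MTL model.
print("auxiliary task label counts:", np.bincount(aux_labels))
```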
Related papers
- Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels [27.53399899573121]
We propose an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning.
Experimental results across various tasks, including document-level relation extraction, demonstrate the generalization and effectiveness of our framework.
arXiv Detail & Related papers (2024-06-24T03:36:19Z)
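A rough sketch of how the supervised/RL combination above might look in PyTorch: a supervised BCE loss exploits the observed labels, while a REINFORCE-style term explores assignments for the missing ones. The confidence-based reward and the 0.1 weighting are stand-in assumptions, not the paper's actual design.

```python
# Hypothetical sketch: supervised exploitation on observed labels plus
# RL-style exploration over the unobserved ones. The reward below
# (the model's own confidence in its sampled labels) is illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, k = 64, 16, 5                      # batch, features, label classes
x = torch.randn(n, d)
y = torch.randint(0, 2, (n, k)).float()  # full labels (unknown in practice)
mask = torch.rand(n, k) < 0.5            # True where the label is observed

model = torch.nn.Linear(d, k)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    logits = model(x)
    probs = torch.sigmoid(logits)

    # Exploitation: supervised BCE restricted to observed entries.
    sup = F.binary_cross_entropy_with_logits(logits[mask], y[mask])

    # Exploration: sample label guesses for unobserved entries and push up
    # the log-probability of samples the model itself scores as confident.
    with torch.no_grad():
        sampled = torch.bernoulli(probs)
        reward = sampled * probs + (1 - sampled) * (1 - probs)  # in (0, 1)
    log_p = (sampled * torch.log(probs + 1e-8)
             + (1 - sampled) * torch.log(1 - probs + 1e-8))
    rl = -(reward * log_p)[~mask].mean()

    opt.zero_grad()
    (sup + 0.1 * rl).backward()
    opt.step()

print("observed-label BCE:", sup.item())
```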
- Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning [17.236861687708096]
Attention-Guided Incremental Learning (AGILE) is a rehearsal-based CL approach that incorporates compact task attention to effectively reduce interference between tasks.
AGILE significantly improves generalization performance by mitigating task interference and outperforming rehearsal-based approaches in several CL scenarios.
arXiv Detail & Related papers (2024-05-22T20:29:15Z)
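The gist of compact task attention can be sketched as a per-task gating vector over shared features; the module below is an assumption-laden illustration, not AGILE's actual architecture.

```python
# Hypothetical sketch of compact per-task attention over shared features:
# cheap task-specific gating intended to reduce cross-task interference.
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    def __init__(self, n_tasks: int, feat_dim: int):
        super().__init__()
        # One small attention vector per task: "compact" because it adds
        # only n_tasks * feat_dim parameters on top of the shared trunk.
        self.gates = nn.Parameter(torch.zeros(n_tasks, feat_dim))

    def forward(self, shared_feats: torch.Tensor, task_id: int) -> torch.Tensor:
        gate = torch.sigmoid(self.gates[task_id])  # in (0, 1), per feature
        return shared_feats * gate                 # task-specific masking

trunk = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
attn = TaskAttention(n_tasks=3, feat_dim=64)
heads = nn.ModuleList(nn.Linear(64, 10) for _ in range(3))

x = torch.randn(8, 32)
logits_task0 = heads[0](attn(trunk(x), task_id=0))
print(logits_task0.shape)  # torch.Size([8, 10])
```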
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little, or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
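One way to picture the distribution-matching idea in code: keep a task head's batch-level output distribution on data annotated only for another task close to its distribution on its own data. The symmetric-KL criterion below is an illustrative choice, not necessarily the paper's exact loss.

```python
# Hypothetical sketch of cross-task knowledge exchange via distribution
# matching, using a symmetric KL between batch-level mean predictions.
import torch
import torch.nn.functional as F

def mean_prediction(logits: torch.Tensor) -> torch.Tensor:
    # Batch-level class distribution predicted by one task head.
    return F.softmax(logits, dim=-1).mean(dim=0)

def distribution_matching_loss(logits_a: torch.Tensor,
                               logits_b: torch.Tensor) -> torch.Tensor:
    p, q = mean_prediction(logits_a), mean_prediction(logits_b)
    # Symmetric KL between the two batch-level distributions.
    return 0.5 * (F.kl_div(q.log(), p, reduction="sum")
                  + F.kl_div(p.log(), q, reduction="sum"))

head_b = torch.nn.Linear(64, 7)
feats_labeled_for_a = torch.randn(16, 64)   # batch annotated for task A only
feats_labeled_for_b = torch.randn(16, 64)   # batch annotated for task B

loss = distribution_matching_loss(head_b(feats_labeled_for_a),
                                  head_b(feats_labeled_for_b))
loss.backward()
print(float(loss))
```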
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
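A toy version of the sparse-and-shared principle as a regularizer: each task should activate few features, and tasks should overlap in the features they do activate. The specific penalty form and weights below are assumptions.

```python
# Hypothetical sketch of a "sparse + shared" activation regularizer.
import torch

def sparse_shared_penalty(acts_per_task: list[torch.Tensor],
                          l_sparse: float = 1e-3,
                          l_share: float = 1e-3) -> torch.Tensor:
    # Mean absolute activation per feature, one row per task.
    usage = torch.stack([a.abs().mean(dim=0) for a in acts_per_task])
    sparse = usage.sum()                 # encourage few active features
    # Variance of usage across tasks is high when features are private
    # to single tasks; penalizing it encourages shared features.
    share = usage.var(dim=0).sum()
    return l_sparse * sparse + l_share * share

acts = [torch.randn(32, 128).relu() for _ in range(4)]  # 4 tasks' activations
print(float(sparse_shared_penalty(acts)))
```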
- Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks [65.23947618404046]
We introduce a framework that acquires goal-conditioned policies for unseen temporally extended tasks via offline reinforcement learning on broad data.
When faced with a novel task goal, the framework uses an affordance model to plan a sequence of lossy representations as subgoals that decomposes the original task into easier problems.
We show that our framework can be pre-trained on large-scale datasets of robot experiences from prior work and efficiently fine-tuned for novel tasks, entirely from visual inputs without any manual reward engineering.
arXiv Detail & Related papers (2022-10-12T21:46:38Z)
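The subgoal-decomposition idea can be caricatured as greedy chaining in a lossy embedding space, as in the sketch below; the embedding bank, hop radius, and selection rule are all illustrative stand-ins for the learned affordance model.

```python
# Hypothetical sketch of planning with lossy subgoal representations:
# greedily chain intermediate embeddings from offline data toward a novel
# goal, decomposing the original task into easier hops.
import numpy as np

rng = np.random.default_rng(1)
bank = rng.normal(size=(500, 16))       # lossy embeddings from offline data
start, goal = bank[0], rng.normal(size=16)

def plan_subgoals(start, goal, bank, hop=5.0, max_steps=10):
    path, current = [], start
    for _ in range(max_steps):
        if np.linalg.norm(goal - current) <= hop:
            break
        # Candidates reachable within one "hop" of the current embedding.
        dists = np.linalg.norm(bank - current, axis=1)
        near = bank[(dists > 1e-9) & (dists <= hop)]
        if len(near) == 0:
            break
        # Pick the reachable embedding closest to the goal as next subgoal.
        current = near[np.argmin(np.linalg.norm(near - goal, axis=1))]
        path.append(current)
    return path

subgoals = plan_subgoals(start, goal, bank)
print(f"{len(subgoals)} subgoals planned")
```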
- A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z)
- Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task [24.618094251341958]
Multi-task learning aims to improve model performance by transferring and exploiting common knowledge among tasks.
We propose a framework that learns these tasks jointly by leveraging abundant information from a learnt auxiliary big task whose classes are numerous enough to cover those of all the individual tasks.
Our experimental results demonstrate its effectiveness in comparison with the state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-07T02:46:47Z)
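One plausible reading of the auxiliary-big-task construction, sketched below: route every small task through one big classifier whose classes cover them all, so the tasks share a jointly trained backbone. The class mappings and the `logsumexp` aggregation are assumptions for illustration.

```python
# Hypothetical sketch of learning small tasks with inconsistent label sets
# through one auxiliary "big" task: each small task's logits are read off
# the big head via a class mapping, so all tasks train the same classifier.
import torch

big_head = torch.nn.Linear(64, 20)          # 20 classes cover every task

# Assumed mappings: small-task class -> list of big-task class indices.
task_maps = {
    "task_a": [[0, 1], [2, 3, 4]],          # 2-class task
    "task_b": [[5], [6, 7], [8, 9]],        # 3-class task
}

def small_task_logits(feats: torch.Tensor, task: str) -> torch.Tensor:
    big = big_head(feats)                    # (batch, 20)
    # Aggregate big-task logits over each small class's covered indices.
    return torch.stack(
        [big[:, idx].logsumexp(dim=1) for idx in task_maps[task]], dim=1
    )

feats = torch.randn(4, 64)
print(small_task_logits(feats, "task_a").shape)  # torch.Size([4, 2])
```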
- Transfer Learning in Conversational Analysis through Reusing Preprocessing Data as Supervisors [52.37504333689262]
Using noisy labels in single-task learning increases the risk of over-fitting.
Auxiliary tasks can improve the performance of the primary task when learned during the same training.
arXiv Detail & Related papers (2021-12-02T08:40:42Z)
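A minimal sketch of the reuse idea: labels that preprocessing produces anyway (here a fabricated speaker-change flag) supervise an auxiliary head next to the primary one. Head names and the loss weighting are assumptions.

```python
# Hypothetical sketch of reusing preprocessing byproducts as auxiliary
# supervisors alongside the primary conversational task.
import torch
import torch.nn.functional as F

trunk = torch.nn.Linear(128, 64)
primary_head = torch.nn.Linear(64, 3)    # e.g., dialogue-act classes
aux_head = torch.nn.Linear(64, 1)        # e.g., speaker-change detection

x = torch.randn(32, 128)
y_primary = torch.randint(0, 3, (32,))
y_aux = torch.randint(0, 2, (32, 1)).float()  # produced by preprocessing

h = torch.relu(trunk(x))
loss = (F.cross_entropy(primary_head(h), y_primary)
        + 0.3 * F.binary_cross_entropy_with_logits(aux_head(h), y_aux))
loss.backward()
print(float(loss))
```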
- Active Refinement for Multi-Label Learning: A Pseudo-Label Approach [84.52793080276048]
Multi-label learning (MLL) aims to associate a given instance with its relevant labels from a set of concepts.
Previous works of MLL mainly focused on the setting where the concept set is assumed to be fixed.
Many real-world applications require introducing new concepts into the set to meet new demands.
arXiv Detail & Related papers (2021-09-29T19:17:05Z)
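The pseudo-label approach to newly introduced concepts might be skeletonized as below: trust confident scores as pseudo-labels and actively query the most uncertain examples. Thresholds and the query budget are assumptions.

```python
# Hypothetical sketch of extending a multi-label model to a new concept:
# confident predictions become pseudo-labels, the most uncertain examples
# are queried for human annotation.
import numpy as np

rng = np.random.default_rng(2)
new_concept_probs = rng.uniform(size=200)   # model scores for the new label

pseudo_pos = new_concept_probs >= 0.9       # trusted positives
pseudo_neg = new_concept_probs <= 0.1       # trusted negatives
uncertainty = -np.abs(new_concept_probs - 0.5)
query = np.argsort(uncertainty)[-10:]       # 10 examples nearest to 0.5

print(f"{pseudo_pos.sum()} pseudo-positives, "
      f"{pseudo_neg.sum()} pseudo-negatives, query ids: {sorted(query)}")
```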
- Few-Shot Unsupervised Continual Learning through Meta-Examples [21.954394608030388]
We introduce a novel and complex setting involving unsupervised meta-continual learning with unbalanced tasks.
We exploit a meta-learning scheme that simultaneously alleviates catastrophic forgetting and favors the generalization to new tasks.
Experimental results on few-shot learning benchmarks show competitive performance even compared to the supervised case.
arXiv Detail & Related papers (2020-09-17T07:02:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.