Differentiable Weight Masks for Domain Transfer
- URL: http://arxiv.org/abs/2308.13957v2
- Date: Sat, 7 Oct 2023 04:52:15 GMT
- Title: Differentiable Weight Masks for Domain Transfer
- Authors: Samar Khanna, Skanda Vaidyanath, Akash Velu
- Abstract summary: One of the major drawbacks of deep learning models for computer vision has been their inability to retain multiple sources of information in a modular fashion.
We study three such weight masking methods to analyse their ability to mitigate "forgetting" on the source task.
We find that different masking techniques have trade-offs in retaining knowledge in the source task without adversely affecting target task performance.
- Score: 2.008400316189417
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the major drawbacks of deep learning models for computer vision has
been their inability to retain multiple sources of information in a modular
fashion. For instance, given a network that has been trained on a source task,
we would like to re-train this network on a similar, yet different, target task
while maintaining its performance on the source task. Simultaneously,
researchers have extensively studied modularization of network weights to
localize and identify the set of weights culpable for eliciting the observed
performance on a given task. One set of works studies the modularization
induced in the weights of a neural network by learning and analysing weight
masks. In this work, we combine these fields to study three such weight masking
methods and analyse their ability to mitigate "forgetting" on the source task
while also allowing for efficient finetuning on the target task. We find that
different masking techniques have trade-offs in retaining knowledge in the
source task without adversely affecting target task performance.
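To make the setup concrete, here is a minimal sketch of one generic differentiable weight-masking scheme: a sigmoid relaxation of a binary mask trained with a straight-through estimator. This illustrates the general idea only; the paper compares three specific masking methods, and the names and values below (`MaskedLinear`, the logit initialization of 3.0) are illustrative assumptions, not taken from it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Linear layer whose frozen source weights are gated by a learned mask."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight)
        # One real-valued logit per weight, initialized so sigmoid(logit) ~ 1,
        # i.e. the mask starts out passing the source weights through.
        self.mask_logits = nn.Parameter(torch.full_like(self.weight, 3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        soft = torch.sigmoid(self.mask_logits)
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # sigmoid gradients in the backward pass.
        mask = (soft > 0.5).float() + soft - soft.detach()
        return F.linear(x, self.weight * mask, self.bias)


# Transfer recipe: freeze the source weights and train only the mask on the
# target task; the untouched weights keep the source solution intact.
layer = MaskedLinear(128, 10)
layer.weight.requires_grad_(False)
optimizer = torch.optim.Adam([layer.mask_logits], lr=1e-2)
```

Because the source weights stay frozen, source-task behaviour can be recovered by resetting the mask to all-ones; only the mask logits move during target finetuning, which is what makes forgetting measurable and controllable in this family of methods.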
Related papers
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as a dense model; a generic sketch of this routing idea follows this entry.
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
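Below is a minimal sketch of what task-aware expert routing could look like. It is not the paper's implementation: the module name `TaskAwareMoE`, the per-task gating embedding, and the top-1 routing rule are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class TaskAwareMoE(nn.Module):
    """Toy task-aware MoE layer: each task learns a gate over the experts."""

    def __init__(self, dim: int, num_experts: int, num_tasks: int):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Embedding(num_tasks, num_experts)  # per-task gating logits

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        scores = self.gate(task_id).softmax(dim=-1)  # (batch, num_experts)
        top = scores.argmax(dim=-1)                  # top-1 expert per example
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top == e
            if sel.any():  # only the routed expert runs, so cost stays ~dense
                out[sel] = scores[sel, e].unsqueeze(-1) * expert(x[sel])
        return out


# Example: a batch of 4 examples from tasks 0 and 1 routed over 8 experts.
moe = TaskAwareMoE(dim=16, num_experts=8, num_tasks=2)
y = moe(torch.randn(4, 16), torch.tensor([0, 0, 1, 1]))
```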
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) addresses scenarios in which the test data do not fully follow the distribution of the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study of resource task sampling by leveraging techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance; a generic sketch of this loop follows this entry.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
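Read literally, the loop above alternates relevance estimation with relevance-proportional sampling. Here is a minimal sketch of that loop under stated assumptions: `estimate_relevance(i)` and `train_on(i)` are hypothetical caller-supplied hooks, and proportional sampling stands in for the paper's actual estimator.

```python
import numpy as np


def active_task_sampling(num_sources, estimate_relevance, train_on,
                         rounds=10, steps_per_round=100):
    """Alternate between training on sampled source tasks and re-estimating
    each source's relevance to the target task. Relevance scores are assumed
    non-negative; both hooks are hypothetical, not APIs from the paper."""
    probs = np.full(num_sources, 1.0 / num_sources)  # start uniform
    for _ in range(rounds):
        for _ in range(steps_per_round):
            i = np.random.choice(num_sources, p=probs)
            train_on(i)  # draw and train on a batch from source task i
        scores = np.array([estimate_relevance(i) for i in range(num_sources)])
        probs = scores / scores.sum()  # sample more from relevant sources
    return probs
```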
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- Self-Supervised Visual Representation Learning Using Lightweight Architectures [0.0]
In self-supervised learning, a model is trained to solve a pretext task, using a data set whose annotations are created by a machine.
We critically examine the most notable pretext tasks to extract features from image data.
We study the performance of various self-supervised techniques keeping all other parameters uniform.
arXiv Detail & Related papers (2021-10-21T14:13:10Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We approach multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose a new suite of benchmarks aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Deep Learning for Automatic Quality Grading of Mangoes: Methods and Insights [1.0742675209112622]
The paper approaches the grading task with various convolutional neural networks (CNNs), a tried-and-tested deep learning technology in computer vision.
The models involved include Mask R-CNN (for background removal) and several past winners of the ImageNet challenge, namely AlexNet, VGGs, and ResNets.
The paper provides explainable insights into the models' workings with the help of saliency maps and principal component analysis (PCA).
arXiv Detail & Related papers (2020-11-23T13:09:47Z)
- Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models trained without supervision to another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant source models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
- Learning Context-aware Task Reasoning for Efficient Meta-reinforcement Learning [29.125234093368732]
We propose a novel meta-RL strategy to achieve human-level efficiency in learning novel tasks.
We decompose the meta-RL problem into three sub-tasks: task-exploration, task-inference, and task-fulfillment.
Our algorithm effectively performs exploration for task inference, improves sample efficiency during both training and testing, and mitigates the meta-overfitting problem.
arXiv Detail & Related papers (2020-03-03T07:38:53Z)