Not all domains are equally complex: Adaptive Multi-Domain Learning
- URL: http://arxiv.org/abs/2003.11504v1
- Date: Wed, 25 Mar 2020 17:16:00 GMT
- Title: Not all domains are equally complex: Adaptive Multi-Domain Learning
- Authors: Ali Senhaji, Jenni Raitoharju, Moncef Gabbouj and Alexandros Iosifidis
- Abstract summary: We propose an adaptive parameterization approach to deep neural networks for multi-domain learning.
The proposed approach performs on par with the original approach while greatly reducing the number of parameters.
- Score: 98.25886129591974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning approaches are highly specialized and require training separate
models for different tasks. Multi-domain learning looks at ways to learn a
multitude of different tasks, each coming from a different domain, at once. The
most common approach in multi-domain learning is to form a domain-agnostic
model, the parameters of which are shared among all domains, and learn a small
number of extra domain-specific parameters for each individual new domain.
However, different domains come with different levels of difficulty;
parameterizing the models of all domains using an augmented version of the
domain-agnostic model leads to unnecessarily inefficient solutions, especially
for easy-to-solve tasks. We propose an adaptive parameterization approach to
deep neural networks for multi-domain learning. The proposed approach performs
on par with the original approach while greatly reducing the number of
parameters, leading to efficient multi-domain learning solutions.
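The contrast the abstract draws, a fixed domain-specific parameter budget versus one scaled to domain difficulty, can be sketched with back-of-the-envelope numbers (all parameter counts, domain names, and difficulty scores below are invented for illustration, not taken from the paper):

```python
# A minimal sketch (hypothetical numbers) of why adaptive parameterization
# helps: instead of giving every domain the same adapter budget, easy
# domains get smaller domain-specific parameter sets than hard ones.
backbone = 25_000_000                      # shared, domain-agnostic parameters

# Fixed-size domain-specific parameters: every domain pays the same cost.
fixed_adapter = 500_000
domains = ["digits", "textures", "aircraft", "flowers"]
fixed_total = backbone + fixed_adapter * len(domains)

# Adaptive: budget scaled to an (assumed) per-domain difficulty in [0, 1].
difficulty = {"digits": 0.1, "textures": 0.4, "aircraft": 1.0, "flowers": 0.6}
adaptive_total = backbone + sum(round(fixed_adapter * d)
                                for d in difficulty.values())

assert adaptive_total < fixed_total        # easy domains cut the total cost
```

Only the hardest domain pays the full adapter cost here; the easy ones contribute a fraction of it, which is the intuition behind the parameter savings claimed above.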
Related papers
- Multi-BERT: Leveraging Adapters and Prompt Tuning for Low-Resource Multi-Domain Adaptation [14.211024633768986]
The rapid growth in the volume and diversity of texts presents formidable challenges in multi-domain settings.
Traditional approaches, whether employing a unified model for multiple domains or an individual model for each domain, frequently suffer from significant limitations.
This paper introduces a novel approach composed of one core model with multiple sets of domain-specific parameters.
arXiv Detail & Related papers (2024-04-02T22:15:48Z)
- Budget-Aware Pruning: Handling Multiple Domains with Less Parameters [43.26944909318156]
This work aims to prune models capable of handling multiple domains according to a user-defined budget.
We achieve this by encouraging all domains to use a similar subset of filters from the baseline model.
The proposed approach innovates by better adapting to resource-limited devices while being one of the few works that handles multiple domains at test time.
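The shared-filter-subset idea can be roughly sketched as follows (the random importance scores and sum-based ranking are illustrative stand-ins, not the paper's actual pruning procedure):

```python
import numpy as np

# Sketch: score each filter of a baseline conv layer per domain, then keep
# one shared subset that all domains jointly rank highest, so a single
# pruned model can serve every domain within the budget.
rng = np.random.default_rng(0)
num_filters, num_domains, budget = 8, 3, 4     # keep 4 of 8 filters

# Hypothetical per-domain filter importance (e.g. L1 norms of weights).
scores = rng.random((num_domains, num_filters))

# Encourage a shared subset: rank filters by importance summed over domains.
shared_rank = scores.sum(axis=0)
kept = np.argsort(shared_rank)[-budget:]       # filters kept for ALL domains

assert len(kept) == budget
```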
arXiv Detail & Related papers (2023-09-20T17:00:31Z)
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
The proposed method, TALLY, builds on a selective balanced sampling strategy and achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
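Assuming representations are already disentangled into a class-semantic part and a domain-nuisance part, the mixing step can be sketched as follows (array names and sizes are illustrative, not TALLY's actual interfaces):

```python
import numpy as np

# Toy sketch of representation mixing: keep example A's semantics (and
# therefore A's class label), but swap in example B's domain nuisances,
# yielding an augmented example from an underrepresented class in a new
# domain context.
sem_a = np.array([1.0, 0.0, 2.0])   # semantic part of example A
nui_a = np.array([0.3, 0.3])        # domain-nuisance part of example A
nui_b = np.array([-0.7, 1.2])       # domain-nuisance part of example B

# Augmented example: A's semantics combined with B's domain nuisances.
augmented = np.concatenate([sem_a, nui_b])

assert augmented.shape == (5,)
```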
arXiv Detail & Related papers (2022-10-25T21:54:26Z)
- Budget-Aware Pruning for Multi-Domain Learning [45.84899283894373]
This work aims to prune models capable of handling multiple domains according to a user-defined budget.
We achieve this by encouraging all domains to use a similar subset of filters from the baseline model.
The proposed approach innovates by better adapting to resource-limited devices.
arXiv Detail & Related papers (2022-10-14T20:48:12Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
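The training signal described above can be sketched as follows (the Gaussian style noise, the masking ratio, and the identity "decoder" are simplified stand-ins for DiMAE's actual components):

```python
import numpy as np

# Simplified sketch of the DiMAE objective: perturb an image with style
# noise from another domain, mask most pixels, and reconstruct the CLEAN
# original, pushing the learned embedding to be style (domain) invariant.
rng = np.random.default_rng(0)
image = rng.random((4, 4))                        # toy 4x4 grayscale image
style_noise = 0.2 * rng.standard_normal((4, 4))   # stand-in for domain style

augmented = image + style_noise                   # style-augmented input
mask = rng.random((4, 4)) < 0.75                  # mask ~75% of pixels
visible = np.where(mask, 0.0, augmented)          # encoder input (unmasked only)

# Loss targets the ORIGINAL, style-free image at the masked positions;
# the identity "reconstruction" below is a placeholder for a real decoder.
reconstruction = visible
loss = float(((reconstruction - image) ** 2)[mask].mean())

assert loss >= 0.0
```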
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Boosting Binary Masks for Multi-Domain Learning through Affine Transformations [49.25451497933657]
The goal of multi-domain learning is to produce a single model performing a task in all the domains together.
Recent works showed how we can address this problem by masking the internal weights of a given original conv-net through learned binary variables.
We provide a general formulation of binary mask based models for multi-domain learning by affine transformations of the original network parameters.
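The affine generalization of binary masking can be sketched as follows (the scalar names `k0`, `k1` and the exact combination rule are illustrative; the paper's parameterization may differ):

```python
import numpy as np

# Sketch of mask-based domain adaptation: the shared weights W stay frozen,
# and each domain learns a binary mask plus affine scalars (k0, k1), giving
# W_domain = k0 * W + k1 * (mask * W).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))        # frozen weights of the original conv-net

mask = rng.random((4, 4)) > 0.5        # learned binary mask for one domain
k0, k1 = 0.8, 1.5                      # learned affine scalars for that domain

W_domain = k0 * W + k1 * (mask * W)    # domain-specific transformed weights

# Plain binary masking is recovered as the special case k0 = 0, k1 = 1.
W_masked_only = mask * W
assert W_domain.shape == W.shape
```

Because only the mask and two scalars are stored per domain, the per-domain overhead stays tiny relative to the shared network.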
arXiv Detail & Related papers (2021-03-25T14:54:37Z)
- Multi-path Neural Networks for On-device Multi-domain Visual Classification [55.281139434736254]
This paper proposes a novel approach to automatically learn a multi-path network for multi-domain visual classification on mobile devices.
The proposed multi-path network is learned from neural architecture search by applying one reinforcement learning controller for each domain to select the best path in the super-network created from a MobileNetV3-like search space.
The determined multi-path model selectively shares parameters across domains in shared nodes while keeping domain-specific parameters within non-shared nodes in individual domain paths.
arXiv Detail & Related papers (2020-10-10T05:13:49Z)
- Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.