Model-agnostic Multi-Domain Learning with Domain-Specific Adapters for
Action Recognition
- URL: http://arxiv.org/abs/2204.07270v1
- Date: Fri, 15 Apr 2022 00:02:13 GMT
- Title: Model-agnostic Multi-Domain Learning with Domain-Specific Adapters for
Action Recognition
- Authors: Kazuki Omi, Toru Tamaki
- Abstract summary: The proposed method inserts domain-specific adapters between domain-independent layers of a backbone network.
Unlike a multi-head network that switches only classification heads, our model switches both the heads and the adapters, facilitating the learning of feature representations universal to multiple domains.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a multi-domain learning model for action
recognition. The proposed method inserts domain-specific adapters between
domain-independent layers of a backbone network. Unlike a multi-head network
that switches only classification heads, our model switches both the heads and
the adapters, facilitating the learning of feature representations universal to
multiple domains. Unlike prior works, the proposed method is model-agnostic and
does not assume a particular model structure. Experimental results on three
popular action recognition datasets (HMDB51, UCF101, and Kinetics-400)
demonstrate that the proposed method is more effective than a multi-head
architecture and more efficient than separately training models for each
domain.
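The abstract does not include code; the following minimal PyTorch sketch only illustrates the adapter-switching idea it describes. The bottleneck-adapter design, dimensions, and all class and parameter names here are our own assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Hypothetical lightweight bottleneck adapter with a residual connection."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class MultiDomainModel(nn.Module):
    """Shared (domain-independent) backbone layers; per-domain adapters are
    inserted between them, and both the adapter set and the classification
    head are switched according to the input's domain."""
    def __init__(self, layers, dim, domains, num_classes):
        super().__init__()
        self.layers = nn.ModuleList(layers)  # shared across all domains
        self.adapters = nn.ModuleDict({
            d: nn.ModuleList([Adapter(dim) for _ in layers]) for d in domains
        })
        self.heads = nn.ModuleDict({
            d: nn.Linear(dim, n) for d, n in num_classes.items()
        })

    def forward(self, x, domain):
        for layer, adapter in zip(self.layers, self.adapters[domain]):
            x = adapter(layer(x))  # domain-specific adapter after each shared layer
        return self.heads[domain](x)
```

A usage sketch: building the model with two shared linear layers and two domains, then routing a batch through the adapters and head of one domain.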
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study the multi-source Domain Generalization of text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning [27.757318834190443]
Cross-domain few-shot learning presents a formidable challenge, as models must be trained on base classes and tested on novel classes from various domains with only a few samples at hand.
We introduce a novel generic framework that leverages normalization layers in adapters with Progressive Learning and Adaptive Distillation (ProLAD).
We deploy two strategies: a progressive training of the two adapters and an adaptive distillation technique derived from features determined by the model solely with the adapter devoid of a normalization layer.
arXiv Detail & Related papers (2023-12-18T15:02:14Z)
- Adaptive Parametric Prototype Learning for Cross-Domain Few-Shot Classification [23.82751179819225]
We develop a novel Adaptive Parametric Prototype Learning (APPL) method under the meta-learning convention for cross-domain few-shot classification.
APPL yields superior performance compared to many state-of-the-art cross-domain few-shot learning methods.
arXiv Detail & Related papers (2023-09-04T03:58:50Z)
- Multi-Domain Learning with Modulation Adapters [33.54630534228469]
Multi-domain learning aims to handle related tasks, such as image classification across multiple domains, simultaneously.
Modulation Adapters update the convolutional weights of the model in a multiplicative manner for each task.
Our approach yields excellent results, with accuracies that are comparable to or better than those of existing state-of-the-art approaches.
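The summary above describes per-task multiplicative updates of convolutional weights. As a minimal sketch of that idea only, assuming a learnable per-channel scaling of a shared convolution (the shapes and names here are our illustration, not the paper's):

```python
import torch
import torch.nn as nn

class ModulationAdapter(nn.Module):
    """Hypothetical per-domain multiplicative modulation of shared conv weights."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv  # shared, domain-independent convolution
        # learnable multiplicative scales, one per (out_channel, in_channel) pair
        self.scale = nn.Parameter(
            torch.ones(conv.out_channels, conv.in_channels, 1, 1)
        )

    def forward(self, x):
        w = self.conv.weight * self.scale  # multiplicative weight update
        return nn.functional.conv2d(
            x, w, self.conv.bias,
            stride=self.conv.stride, padding=self.conv.padding,
        )
```

Each domain would hold its own `scale` tensor while the underlying convolution stays shared, which is one plausible reading of "updating the convolutional weights in a multiplicative manner".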
arXiv Detail & Related papers (2023-07-17T14:40:16Z)
- Multi-Target Domain Adaptation with Collaborative Consistency Learning [105.7615147382486]
We propose a collaborative learning framework to achieve unsupervised multi-target domain adaptation.
The proposed method can effectively exploit rich structured information contained in both the labeled source domain and multiple unlabeled target domains.
arXiv Detail & Related papers (2021-06-07T08:36:20Z)
- Universal Representation Learning from Multiple Domains for Few-shot Classification [41.821234589075445]
We propose to learn a single set of universal deep representations by distilling knowledge of multiple separately trained networks.
We show that the universal representations can be further refined for previously unseen domains by an efficient adaptation step.
arXiv Detail & Related papers (2021-03-25T13:49:12Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE)
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to 'unseen' camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
- Multi-path Neural Networks for On-device Multi-domain Visual Classification [55.281139434736254]
This paper proposes a novel approach to automatically learn a multi-path network for multi-domain visual classification on mobile devices.
The proposed multi-path network is learned from neural architecture search by applying one reinforcement learning controller for each domain to select the best path in the super-network created from a MobileNetV3-like search space.
The determined multi-path model selectively shares parameters across domains in shared nodes while keeping domain-specific parameters within non-shared nodes in individual domain paths.
arXiv Detail & Related papers (2020-10-10T05:13:49Z)
- Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA)
Existing DA methods tackle domain-shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance as compared to both DA and CI methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z)
- Cross-domain Face Presentation Attack Detection via Multi-domain Disentangled Representation Learning [109.42987031347582]
Face presentation attack detection (PAD) is an urgent problem to be solved in face recognition systems.
We propose an efficient disentangled representation learning for cross-domain face PAD.
Our approach consists of disentangled representation learning (DR-Net) and multi-domain learning (MD-Net)
arXiv Detail & Related papers (2020-04-04T15:45:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of the summaries above) and is not responsible for any consequences of its use.