Multi-modal Instance Refinement for Cross-domain Action Recognition
- URL: http://arxiv.org/abs/2311.14281v1
- Date: Fri, 24 Nov 2023 05:06:28 GMT
- Title: Multi-modal Instance Refinement for Cross-domain Action Recognition
- Authors: Yuan Qing, Naixing Wu, Shaohua Wan, Lixin Duan
- Abstract summary: Unsupervised cross-domain action recognition aims at adapting the model trained on an existing labeled source domain to a new unlabeled target domain.
We propose a Multi-modal Instance Refinement (MMIR) method to alleviate the negative transfer based on reinforcement learning.
Our method finally outperforms several other state-of-the-art baselines in cross-domain action recognition on the benchmark EPIC-Kitchens dataset.
- Score: 25.734898762987083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised cross-domain action recognition aims at adapting the model
trained on an existing labeled source domain to a new unlabeled target domain.
Most existing methods solve the task by directly aligning the feature
distributions of source and target domains. However, this would cause negative
transfer during domain adaptation due to some negative training samples in both
domains. In the source domain, some training samples have low relevance to the
target domain due to differences in viewpoints, action styles, etc. In the
target domain, some ambiguous training samples can easily be misclassified as
another type of action when viewed from the perspective of the source domain. The
problem of negative transfer has been explored in cross-domain object
detection, while it remains under-explored in cross-domain action recognition.
Therefore, we propose a Multi-modal Instance Refinement (MMIR) method to
alleviate the negative transfer based on reinforcement learning. Specifically,
a reinforcement learning agent is trained in both domains for every modality to
refine the training data by selecting out negative samples from each domain.
Our method finally outperforms several other state-of-the-art baselines in
cross-domain action recognition on the benchmark EPIC-Kitchens dataset, which
demonstrates the advantage of MMIR in reducing negative transfer.
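The abstract describes training a reinforcement learning agent per domain and per modality to filter out negative samples. The paper's actual agent design is not given here, so the following is only a minimal sketch of the general idea: a REINFORCE-style policy that learns keep/drop decisions over source instances, with a toy proxy reward (distance of the kept set to the target centroid) standing in for the classifier-based feedback a real method would use. All names, the linear policy, and the reward design are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: most source features lie near the target distribution, but a
# few outliers would cause negative transfer if kept during adaptation.
target = rng.normal(0.0, 1.0, size=(50, 8))
source = np.vstack([
    rng.normal(0.0, 1.0, size=(40, 8)),   # relevant source samples
    rng.normal(5.0, 1.0, size=(10, 8)),   # "negative" samples (outliers)
])
t_centroid = target.mean(axis=0)

# Linear policy: keep-probability of a sample from its centered features.
w = np.zeros(8)

def keep_probs(x):
    return 1.0 / (1.0 + np.exp(-(x - t_centroid) @ w))

def reward(mask):
    # Proxy reward: negative mean distance of the kept set to the target
    # centroid. The real method would use downstream classifier feedback.
    if mask.sum() == 0:
        return -100.0
    return -np.linalg.norm(source[mask] - t_centroid, axis=1).mean()

lr = 0.01
baseline = reward(np.ones(len(source), dtype=bool))  # moving-average baseline
for step in range(2000):
    p = keep_probs(source)
    mask = rng.random(len(source)) < p          # sample keep/drop actions
    r = reward(mask)
    baseline = 0.95 * baseline + 0.05 * r
    # REINFORCE gradient of the log-probability of the sampled actions.
    grad = ((mask - p)[:, None] * (source - t_centroid)).sum(axis=0)
    w += lr * (r - baseline) * grad

p_final = keep_probs(source)
print("mean keep prob, relevant :", p_final[:40].mean())
print("mean keep prob, outliers :", p_final[40:].mean())
```

The key property the sketch illustrates is that dropping the outliers strictly improves the proxy reward, so the policy is rewarded for assigning them low keep-probabilities; the paper applies this kind of selection separately in each domain and for each modality (e.g. RGB and optical flow).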
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and hardly adaptable to the target domain.
We propose contrastive adversarial training (CAT) approach that leverages the labeled source domain samples to reinforce and regulate the feature generation for target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z)
- Unsupervised Cross-Domain Rumor Detection with Contrastive Learning and Cross-Attention [0.0]
Massive rumors usually appear along with breaking news or trending topics, seriously obscuring the truth.
Existing rumor detection methods are mostly focused on the same domain, and thus have poor performance in cross-domain scenarios.
We propose an end-to-end instance-wise and prototype-wise contrastive learning model with a cross-attention mechanism for cross-domain rumor detection.
arXiv Detail & Related papers (2023-03-20T06:19:49Z)
- Target Domain Data induces Negative Transfer in Mixed Domain Training with Disjoint Classes [1.933681537640272]
In practical scenarios, it is often the case that the available training data within the target domain only exist for a limited number of classes.
We show that including the target domain in training when there exist disjoint classes between the target and surrogate domains creates significant negative transfer.
arXiv Detail & Related papers (2023-03-02T06:44:21Z)
- Exploiting Instance-based Mixed Sampling via Auxiliary Source Domain Supervision for Domain-adaptive Action Detection [75.38704117155909]
We propose a novel domain adaptive action detection approach and a new adaptation protocol.
Self-training combined with cross-domain mixed sampling has shown remarkable performance gain in UDA context.
We name our proposed framework domain-adaptive action instance mixing (DA-AIM).
arXiv Detail & Related papers (2022-09-28T22:03:25Z)
- Multilevel Knowledge Transfer for Cross-Domain Object Detection [26.105283273950942]
Domain shift is a well-known problem where a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target).
In this work, we address the domain shift problem for the object detection task.
Our approach relies on gradually removing the domain shift between the source and the target domains.
arXiv Detail & Related papers (2021-08-02T15:24:40Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
- Unsupervised Domain Adaptive Object Detection using Forward-Backward Cyclic Adaptation [13.163271874039191]
We present a novel approach to perform the unsupervised domain adaptation for object detection through forward-backward cyclic (FBC) training.
Recent adversarial training based domain adaptation methods have shown their effectiveness on minimizing domain discrepancy via marginal feature distributions alignment.
We propose Forward-Backward Cyclic Adaptation, which iteratively computes adaptation from source to target via backward hopping and from target to source via forward passing.
arXiv Detail & Related papers (2020-02-03T06:24:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.