Multi-Modal Cross-Domain Alignment Network for Video Moment Retrieval
- URL: http://arxiv.org/abs/2209.11572v1
- Date: Fri, 23 Sep 2022 12:58:20 GMT
- Title: Multi-Modal Cross-Domain Alignment Network for Video Moment Retrieval
- Authors: Xiang Fang, Daizong Liu, Pan Zhou, Yuchong Hu
- Abstract summary: Video moment retrieval (VMR) aims to localize the target moment from an untrimmed video according to a given language query.
In this paper, we focus on a novel task: cross-domain VMR, where fully-annotated datasets are available in one domain but the domain of interest only contains unannotated datasets.
We propose a novel Multi-Modal Cross-Domain Alignment network to transfer the annotation knowledge from the source domain to the target domain.
- Score: 55.122020263319634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an increasingly popular task in multimedia information retrieval, video
moment retrieval (VMR) aims to localize the target moment from an untrimmed
video according to a given language query. Most previous methods depend heavily
on numerous manual annotations (i.e., moment boundaries), which are extremely
expensive to acquire in practice. In addition, due to the domain gap between
different datasets, directly applying these pre-trained models to an unseen
domain leads to a significant performance drop. In this paper, we focus on a
novel task: cross-domain VMR, where fully-annotated datasets are available in
one domain ("source domain"), but the domain of interest ("target domain")
only contains unannotated datasets. To the best of our knowledge, this is the
first study of cross-domain VMR. To address this new task, we propose a novel
Multi-Modal Cross-Domain Alignment (MMCDA) network to transfer the annotation
knowledge from the source domain to the target domain. However, the domain
discrepancy between the two domains and the semantic gap between videos and
queries make this transfer non-trivial. To bridge both gaps, we develop
three novel modules: (i) a domain alignment module is designed to align the
feature distributions between different domains of each modality; (ii) a
cross-modal alignment module aims to map both video and query features into a
joint embedding space and to align the feature distributions between different
modalities in the target domain; (iii) a specific alignment module captures the
fine-grained similarity between each frame and the given query for precise
localization. By jointly training these three modules, MMCDA learns
domain-invariant and semantically aligned cross-modal representations.
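The abstract names the three modules but gives no implementation details. Below is a minimal PyTorch sketch of how such a pipeline could be wired together; the module interfaces, feature dimensions, and the linear-kernel MMD used as the alignment loss are all assumptions for illustration, not the paper's actual design.

```python
# Hedged sketch of the three MMCDA alignment modules. All names, dimensions,
# and the MMD alignment loss are assumptions; the abstract does not specify
# the actual architecture or losses.
import torch
import torch.nn as nn
import torch.nn.functional as F


def mmd_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Linear-kernel MMD: squared distance between batch mean embeddings,
    # a common stand-in for a distribution-alignment loss.
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()


class CrossModalAlignment(nn.Module):
    # (ii) Project video and query features into a joint embedding space,
    # then align their distributions in the target domain.
    def __init__(self, video_dim=1024, query_dim=300, joint_dim=256):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, joint_dim)
        self.query_proj = nn.Linear(query_dim, joint_dim)

    def forward(self, video_feat, query_feat):
        v = F.normalize(self.video_proj(video_feat), dim=-1)
        q = F.normalize(self.query_proj(query_feat), dim=-1)
        return v, q, mmd_loss(v, q)


def specific_alignment(frame_feats: torch.Tensor, query_feat: torch.Tensor):
    # (iii) Fine-grained frame-query similarity used for localization:
    # frame_feats is (T, D), query_feat is (D,); returns a (T,) score vector.
    return frame_feats @ query_feat


# Joint training step (hypothetical batches and loss weights):
video_s, video_t = torch.randn(8, 1024), torch.randn(8, 1024)  # source/target videos
query_s, query_t = torch.randn(8, 300), torch.randn(8, 300)    # source/target queries
cross_modal = CrossModalAlignment()
v_t, q_t, loss_cross = cross_modal(video_t, query_t)
# (i) Domain alignment: align each modality's features across domains.
loss_domain = mmd_loss(video_s, video_t) + mmd_loss(query_s, query_t)
total_loss = loss_domain + loss_cross  # plus a supervised localization loss on the source
```

Jointly minimizing the domain, cross-modal, and localization terms is what would push the network toward the domain-invariant, semantically aligned representations the abstract describes.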
Related papers
- ML-BPM: Multi-teacher Learning with Bidirectional Photometric Mixing for Open Compound Domain Adaptation in Semantic Segmentation [78.19743899703052]
Open compound domain adaptation (OCDA) considers the target domain as the compound of multiple unknown homogeneous subdomains.
We introduce a multi-teacher framework with bidirectional photometric mixing to adapt to every target subdomain.
We conduct adaptive distillation to learn a student model and apply consistency regularization to improve its generalization.
arXiv Detail & Related papers (2022-07-19T03:30:48Z)
- Domain Attention Consistency for Multi-Source Domain Adaptation [100.25573559447551]
The key design is a feature channel attention module, which aims to identify transferable features (attributes).
Experiments on three MSDA benchmarks show that our DAC-Net achieves new state-of-the-art performance on all of them.
arXiv Detail & Related papers (2021-11-06T15:56:53Z)
- VDM-DA: Virtual Domain Modeling for Source Data-free Domain Adaptation [26.959377850768423]
Domain adaptation aims to leverage a label-rich domain (the source domain) to help model learning in a label-scarce domain (the target domain).
Access to the source-domain samples may not always be feasible in real-world applications, for example because of privacy constraints.
We propose a novel approach referred to as Virtual Domain Modeling (VDM-DA).
arXiv Detail & Related papers (2021-03-26T09:56:40Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image style, which is extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online with the model-agnostic meta-learning (MAML) algorithm to further improve generalization; a sketch of this update appears after this list.
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider pixel-level alignment between the source domains and the target domain.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
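The "Cluster, Split, Fuse, and Update" entry above mentions an online model update via MAML. The following is a generic first-order MAML step, sketched for orientation only; the batch layout, learning rates, and loss are hypothetical, and the paper's actual procedure differs in detail.

```python
# Generic first-order MAML update (FOMAML). This is a textbook sketch, not
# the procedure from the paper above; `support`/`query` dicts and the
# learning rates are placeholder assumptions.
import copy
import torch
import torch.nn as nn


def maml_online_update(model: nn.Module, loss_fn, support, query,
                       inner_lr: float = 0.01, outer_lr: float = 0.001):
    """One first-order MAML step: adapt on `support`, update on `query`."""
    # Inner loop: adapt a throwaway copy of the model on the support batch.
    adapted = copy.deepcopy(model)
    inner_loss = loss_fn(adapted(support["x"]), support["y"])
    grads = torch.autograd.grad(inner_loss, adapted.parameters())
    with torch.no_grad():
        for p, g in zip(adapted.parameters(), grads):
            p -= inner_lr * g

    # Outer loop: evaluate the adapted copy on the query batch and move the
    # original parameters toward values that adapt well (the meta signal).
    outer_loss = loss_fn(adapted(query["x"]), query["y"])
    outer_grads = torch.autograd.grad(outer_loss, adapted.parameters())
    with torch.no_grad():
        for p, g in zip(model.parameters(), outer_grads):
            p -= outer_lr * g
    return outer_loss.item()
```

The inner step adapts a copy on a support batch; the outer step updates the original parameters using the adapted copy's query-batch gradients, which is the first-order approximation that makes online updating cheap.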