Contrast and Mix: Temporal Contrastive Video Domain Adaptation with
Background Mixing
- URL: http://arxiv.org/abs/2110.15128v1
- Date: Thu, 28 Oct 2021 14:03:29 GMT
- Authors: Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das
- Score: 55.73722120043086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation which aims to adapt models trained on a
labeled source domain to a completely unlabeled target domain has attracted
much attention in recent years. While many domain adaptation techniques have
been proposed for images, the problem of unsupervised domain adaptation in
videos remains largely underexplored. In this paper, we introduce Contrast and
Mix (CoMix), a new contrastive learning framework that aims to learn
discriminative invariant feature representations for unsupervised video domain
adaptation. First, unlike existing methods that rely on adversarial learning
for feature alignment, we utilize temporal contrastive learning to bridge the
domain gap by maximizing the similarity between encoded representations of an
unlabeled video at two different speeds as well as minimizing the similarity
between different videos played at different speeds. Second, we propose a novel
extension to the temporal contrastive loss by using background mixing that
allows additional positives per anchor, thus adapting contrastive learning to
leverage action semantics shared across both domains. Moreover, we also
integrate a supervised contrastive learning objective using target
pseudo-labels to enhance discriminability of the latent space for video domain
adaptation. Extensive experiments on several benchmark datasets demonstrate the
superiority of our proposed approach over state-of-the-art methods. Project
page: https://cvir.github.io/projects/comix
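The two-speed temporal contrastive objective described in the abstract can be sketched as an InfoNCE loss in which a clip encoded at one playback speed is the anchor, the same clip at another speed is a positive, and background-mixed clips supply additional positives per anchor. The minimal NumPy sketch below is an illustration under assumed embeddings and an assumed `info_nce` helper, not the authors' implementation:

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.1):
    """InfoNCE loss allowing several positives per anchor
    (positives are summed in the numerator)."""
    def sim(a, b):
        # cosine similarity between two embedding vectors
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = sum(np.exp(sim(anchor, p) / tau) for p in positives)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
fast = rng.normal(size=128)                     # clip encoded at 2x speed (anchor)
slow = fast + 0.05 * rng.normal(size=128)       # same clip at 1x speed (positive)
bg_mixed = fast + 0.10 * rng.normal(size=128)   # background-mixed clip (extra positive)
others = [rng.normal(size=128) for _ in range(8)]  # other videos (negatives)

loss = info_nce(fast, [slow, bg_mixed], others)
```

Because the positives share the numerator, adding a background-mixed positive can only lower the loss for a given anchor, which is how the extra positives inject shared action semantics into the objective.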
Related papers
- Contrastive Domain Adaptation for Time-Series via Temporal Mixup [14.723714504015483]
We propose a novel lightweight contrastive domain adaptation framework called CoTMix for time-series data.
Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains.
Our approach can significantly outperform all state-of-the-art UDA methods.
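The temporal mixup strategy summarized above forms an intermediate view as a convex combination of a dominant-domain sequence with a sequence from the other domain. The sketch below, including the `temporal_mixup` name and the mixing ratio, is a hedged illustration of that idea rather than the CoTMix implementation:

```python
import numpy as np

def temporal_mixup(x_dom, x_other, lam=0.8):
    """Blend a dominant-domain time series with one from the other domain
    to form an intermediate augmented view (arrays of shape [time, feat])."""
    assert x_dom.shape == x_other.shape and 0.5 < lam <= 1.0
    return lam * x_dom + (1.0 - lam) * x_other

rng = np.random.default_rng(1)
src = rng.normal(size=(16, 32))      # source-domain time series
tgt = rng.normal(size=(16, 32))      # target-domain time series

src_view = temporal_mixup(src, tgt)  # source-dominant intermediate view
tgt_view = temporal_mixup(tgt, src)  # target-dominant intermediate view
```

Keeping `lam` above 0.5 makes each view stay closest to its own domain while still carrying information from the other, which is the "two intermediate augmented views" the summary refers to.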
arXiv Detail & Related papers (2022-12-03T06:53:38Z)
- PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptive Semantic Segmentation [100.6343963798169]
Unsupervised Domain Adaptation (UDA) aims to enhance the generalization of the learned model to other domains.
We propose a unified pixel- and patch-wise self-supervised learning framework, called PiPa, for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2022-11-14T18:31:24Z)
- Contrastive Domain Adaptation [4.822598110892847]
We propose to extend contrastive learning to a new domain adaptation setting.
Contrastive learning learns by comparing and contrasting positive and negative pairs of samples in an unsupervised setting.
We have developed a variation of a recently proposed contrastive learning framework that helps tackle the domain adaptation problem.
arXiv Detail & Related papers (2021-03-26T13:55:19Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation [169.82760468633236]
We propose to build the pixel-level cycle association between source and target pixel pairs.
Our method can be trained end-to-end in one stage and introduces no additional parameters.
arXiv Detail & Related papers (2020-10-31T00:11:36Z)
- Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation that leverage adversarial learning to unify the source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)
- Unsupervised Domain Adaptive Object Detection using Forward-Backward Cyclic Adaptation [13.163271874039191]
We present a novel approach to perform the unsupervised domain adaptation for object detection through forward-backward cyclic (FBC) training.
Recent adversarial training based domain adaptation methods have shown their effectiveness on minimizing domain discrepancy via marginal feature distributions alignment.
We propose Forward-Backward Cyclic Adaptation, which iteratively computes adaptation from source to target via backward hopping and from target to source via forward passing.
arXiv Detail & Related papers (2020-02-03T06:24:58Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.