Conditional Extreme Value Theory for Open Set Video Domain Adaptation
- URL: http://arxiv.org/abs/2109.00522v1
- Date: Wed, 1 Sep 2021 10:51:50 GMT
- Title: Conditional Extreme Value Theory for Open Set Video Domain Adaptation
- Authors: Zhuoxiao Chen, Yadan Luo, Mahsa Baktashmotlagh
- Abstract summary: We propose an open-set video domain adaptation approach to mitigate the domain discrepancy between the source and target data.
To alleviate the negative transfer issue, weights computed from the distance between each sample's entropy and the threshold are leveraged in adversarial learning.
The proposed method has been thoroughly evaluated on both small-scale and large-scale cross-domain video datasets.
- Score: 17.474956295874797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the advent of media streaming, video action recognition has become
increasingly important for various applications, yet at the high cost of
requiring large-scale data labelling. To overcome the problem of expensive data
labelling, domain adaptation techniques have been proposed that transfer
knowledge from fully labelled data (i.e., source domain) to unlabelled data
(i.e., target domain). The majority of video domain adaptation algorithms are
proposed for closed-set scenarios in which all the classes are shared among the
domains. In this work, we propose an open-set video domain adaptation approach
to mitigate the domain discrepancy between the source and target data, allowing
the target data to contain additional classes that do not belong to the source
domain. Different from previous works, which only focus on improving accuracy
for shared classes, we aim to jointly enhance the alignment of shared classes
and recognition of unknown samples. Towards this goal, class-conditional
extreme value theory is applied to improve the detection of unknown samples.
Specifically, the entropy values of target samples are modelled as generalised
extreme value distributions, which allows separating unknown samples lying in
the tail of the distribution. To alleviate the negative transfer issue, weights
computed from the distance between each sample's entropy and the threshold are
used in adversarial learning, so that confident source and target samples are
aligned while low-confidence samples are pushed away. The proposed method has
been thoroughly evaluated on both small-scale and large-scale cross-domain
video datasets and achieved state-of-the-art performance.
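As a rough illustration of the two mechanisms described in the abstract, the sketch below fits a generalised extreme value (GEV) distribution to per-class prediction entropies with scipy, flags tail samples as unknown, and derives entropy-to-threshold distances as per-sample alignment weights. The 0.95 tail quantile and the argmax pseudo-labelling are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch only (not the authors' released code): class-conditional
# GEV modelling of prediction entropy for unknown rejection, plus
# entropy-to-threshold distances as adversarial alignment weights.
# The 0.95 tail quantile and argmax pseudo-labelling are assumptions.
import numpy as np
from scipy.stats import genextreme


def prediction_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of softmax outputs; probs has shape (N, C)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)


def fit_class_conditional_gev(probs: np.ndarray, tail_q: float = 0.95) -> dict:
    """Fit one GEV per pseudo-class and return per-class entropy thresholds."""
    entropy = prediction_entropy(probs)
    pseudo = probs.argmax(axis=1)
    thresholds = {}
    for c in np.unique(pseudo):
        shape, loc, scale = genextreme.fit(entropy[pseudo == c])  # MLE fit
        thresholds[c] = genextreme.ppf(tail_q, shape, loc=loc, scale=scale)
    return thresholds


def unknowns_and_weights(probs: np.ndarray, thresholds: dict):
    """Flag tail samples as unknown; the normalised distance from each
    sample's entropy to its class threshold weights adversarial alignment."""
    entropy = prediction_entropy(probs)
    pseudo = probs.argmax(axis=1)
    thr = np.array([thresholds[c] for c in pseudo])
    unknown = entropy > thr                   # tail of the GEV -> unknown
    weights = np.abs(entropy - thr)
    weights /= weights.max() + 1e-12          # confident samples get large weight
    return unknown, weights
```

With probs taken as target-domain softmax outputs, unknown marks samples to reject and weights can scale a per-sample domain-adversarial loss, so that confidently known or unknown samples dominate alignment.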
Related papers
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the strengths of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and each group is treated with tailored learning objectives.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
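For context on the MMD term, a generic biased RBF-kernel estimate of the squared MMD between two feature batches is sketched below; DaC's memory bank and the source-like/target-specific split are omitted, so take this as a simplification rather than the paper's loss.

```python
# Generic biased MMD^2 estimate with a Gaussian (RBF) kernel; the memory-bank
# bookkeeping used by DaC is intentionally left out of this sketch.
import torch


def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """x: (n, d) source-like features, y: (m, d) target-specific features."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```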
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z) - OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that the minimum inter-class distance in the source domain should be a good threshold for deciding between known and unknown samples in the target.
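A closed-form caricature of that intuition might compute the minimum pairwise distance between source class-mean prototypes, as below; note that OVANet actually learns the known/unknown boundary with one-vs-all classifiers rather than deriving it this way.

```python
# Hypothetical illustration: minimum inter-class prototype distance as a
# known-vs-unknown threshold. OVANet learns this boundary instead.
import numpy as np


def min_interclass_threshold(feats: np.ndarray, labels: np.ndarray) -> float:
    """feats: (N, d) source features; labels: (N,) integer class ids."""
    protos = np.stack([feats[labels == c].mean(axis=0) for c in np.unique(labels)])
    d = np.linalg.norm(protos[:, None] - protos[None, :], axis=-1)
    d[np.diag_indices_from(d)] = np.inf       # ignore self-distances
    return float(d.min())
```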
arXiv Detail & Related papers (2021-04-07T18:36:31Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
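The multi-sample contrastive loss mentioned above could take a form like the generic multi-positive loss below; here the affinity-based mining of similar and dissimilar pairs is abstracted into a precomputed pos_mask, an assumption made purely for illustration.

```python
# Generic multi-positive contrastive loss over cross-domain pairs; pos_mask
# stands in for ILA-DA's affinity-based similar-pair extraction (assumed).
import torch
import torch.nn.functional as F


def multi_positive_contrastive(src, tgt, pos_mask, temperature: float = 0.1):
    """src: (n, d), tgt: (m, d), pos_mask: (n, m) bool, True for similar pairs."""
    sim = F.normalize(src, dim=1) @ F.normalize(tgt, dim=1).T / temperature
    log_p = sim - sim.logsumexp(dim=1, keepdim=True)  # log-softmax over targets
    pos = pos_mask.float()
    loss = -(log_p * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return loss.mean()
```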
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformations.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
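A minimal confidence-gated pseudo-labelling step in that spirit is sketched below; the 0.9 threshold is an assumption, and the paper's semantic-consistency check under transformations is not reproduced here.

```python
# Minimal sketch: keep only high-confidence predictions as pseudo-labels.
# The 0.9 cut-off is an illustrative assumption, not the paper's setting.
import numpy as np


def confident_pseudo_labels(probs: np.ndarray, tau: float = 0.9):
    """probs: (N, C) softmax outputs; returns kept indices and their labels."""
    keep = probs.max(axis=1) >= tau
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]
```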
arXiv Detail & Related papers (2020-10-01T10:44:31Z) - Progressive Graph Learning for Open-Set Domain Adaptation [48.758366879597965]
Domain shift is a fundamental problem in visual recognition which typically arises when the source and target data follow different distributions.
In this paper, we tackle a more realistic problem of open-set domain shift where the target data contains additional classes that are not present in the source data.
We introduce an end-to-end Progressive Graph Learning framework where a graph neural network with episodic training is integrated to suppress underlying conditional shift.
arXiv Detail & Related papers (2020-06-22T09:10:34Z)