Overcoming Label Noise for Source-free Unsupervised Video Domain
Adaptation
- URL: http://arxiv.org/abs/2311.18572v1
- Date: Thu, 30 Nov 2023 14:06:27 GMT
- Title: Overcoming Label Noise for Source-free Unsupervised Video Domain
Adaptation
- Authors: Avijit Dasgupta and C. V. Jawahar and Karteek Alahari
- Abstract summary: We present a self-training based source-free video domain adaptation approach.
We use the source pre-trained model to generate pseudo-labels for the target domain samples.
We further enhance the adaptation performance by implementing a teacher-student framework.
- Score: 39.71690595469969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the progress seen in classification methods, current approaches for
handling videos with distribution shifts in source and target domains remain
source-dependent as they require access to the source data during the
adaptation stage. In this paper, we present a self-training based source-free
video domain adaptation approach to address this challenge by bridging the gap
between the source and the target domains. We use the source pre-trained model
to generate pseudo-labels for the target domain samples, which are inevitably
noisy. Thus, we treat the problem of source-free video domain adaptation as
learning from noisy labels and argue that the samples with correct
pseudo-labels can help us in adaptation. To this end, we leverage the
cross-entropy loss as an indicator of the correctness of the pseudo-labels and
use the resulting small-loss samples from the target domain for fine-tuning the
model. We further enhance the adaptation performance by implementing a
teacher-student framework, in which the teacher, which is updated gradually,
produces reliable pseudo-labels. Meanwhile, the student undergoes fine-tuning
on the target domain videos using these generated pseudo-labels to improve its
performance. Extensive experimental evaluations show that our methods, termed
CleanAdapt and CleanAdapt + TS, achieve state-of-the-art results, outperforming
existing approaches on various open datasets. Our source code is publicly
available at https://avijit9.github.io/CleanAdapt.
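The core idea described above (treat the source model's pseudo-labels as noisy and keep only the small-loss target samples for fine-tuning) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the linear "model", the feature dimensions, and the `keep_ratio` value are all assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_small_loss(logits, pseudo_labels, keep_ratio=0.5):
    """Keep the keep_ratio fraction of samples whose cross-entropy against
    their pseudo-labels is smallest; these are treated as correctly labeled."""
    probs = softmax(logits)
    losses = -np.log(probs[np.arange(len(pseudo_labels)), pseudo_labels] + 1e-12)
    k = max(1, int(keep_ratio * len(losses)))
    return np.argsort(losses)[:k]           # indices of the k smallest losses

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))                # stand-in for a source pre-trained model
clips = rng.normal(size=(32, 16))           # stand-in for target video features
logits = clips @ W
pseudo = logits.argmax(axis=1)              # pseudo-labels from the source model
clean = select_small_loss(logits, pseudo, keep_ratio=0.5)
print(len(clean))                           # 16 of 32 samples kept for fine-tuning
```

The small-loss criterion exploits the observation that deep networks fit clean labels before noisy ones, so low cross-entropy is a cheap proxy for pseudo-label correctness.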
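The "+ TS" variant relies on a teacher that is "updated gradually". A common realization of this, assumed here purely for illustration, is an exponential moving average (EMA) of the student's weights:

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.999):
    """Move the teacher slowly toward the student, so the teacher's
    pseudo-labels stay stable while still tracking the student's progress."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

student_w = np.ones((16, 4))               # stand-in for fine-tuned student weights
teacher_w = np.zeros((16, 4))              # teacher lags behind
for _ in range(100):                       # 100 update steps (student held fixed here)
    teacher_w = ema_update(teacher_w, student_w)
print(round(float(teacher_w[0, 0]), 3))    # 0.095 after 100 EMA steps
```

With momentum 0.999 the teacher changes by at most 0.1% of the gap per step, which is why its pseudo-labels are more reliable than the rapidly changing student's.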
Related papers
- De-Confusing Pseudo-Labels in Source-Free Domain Adaptation [14.954662088592762]
Source-free domain adaptation aims to adapt a source-trained model to an unlabeled target domain without access to the source data.
We introduce a novel noise-learning approach tailored to address noise distribution in domain adaptation settings.
arXiv Detail & Related papers (2024-01-03T10:07:11Z)
- Unsupervised Domain Adaptation for Semantic Segmentation with Pseudo
Label Self-Refinement [9.69089112870202]
We propose an auxiliary pseudo-label refinement network (PRN) for online refining of the pseudo labels and also localizing the pixels whose predicted labels are likely to be noisy.
We evaluate our approach on benchmark datasets with three different domain shifts, and our approach consistently performs significantly better than the previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-25T20:31:07Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised
Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data to utilize their distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for
Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume source and target domain data to be simultaneously available during training.
A pre-trained source model is assumed to be available, even though it performs poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-02-23T10:51:45Z)
- Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain
Adaptation [87.60688582088194]
We propose a novel Self-Supervised Noisy Label Learning method.
Our method achieves state-of-the-art results and surpasses other methods by a large margin.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Selective Pseudo-Labeling with Reinforcement Learning for
Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.