Towards Robust Cross-domain Image Understanding with Unsupervised Noise
Removal
- URL: http://arxiv.org/abs/2109.04284v1
- Date: Thu, 9 Sep 2021 14:06:59 GMT
- Title: Towards Robust Cross-domain Image Understanding with Unsupervised Noise
Removal
- Authors: Lei Zhu, Zhaojing Luo, Wei Wang, Meihui Zhang, Gang Chen and Kaiping
Zheng
- Abstract summary: We find that contemporary domain adaptation methods for cross-domain image understanding perform poorly when the source domain is noisy.
We propose a novel method, termed Noise Tolerant Domain Adaptation, for Weakly Supervised Domain Adaptation (WSDA).
We conduct extensive experiments to evaluate the effectiveness of our method on both general images and medical images from COVID-19 and e-commerce datasets.
- Score: 18.21213151403402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models usually require a large amount of labeled data to
achieve satisfactory performance. In multimedia analysis, domain adaptation
studies the problem of cross-domain knowledge transfer from a label rich source
domain to a label scarce target domain, thus potentially alleviating the
annotation requirement for deep learning models. However, we find that
contemporary domain adaptation methods for cross-domain image understanding
perform poorly when the source domain is noisy. Weakly Supervised Domain Adaptation
(WSDA) studies the domain adaptation problem under the scenario where source
data can be noisy. Prior methods on WSDA remove noisy source data and align the
marginal distribution across domains without considering the fine-grained
semantic structure in the embedding space, which leads to the problem of class
misalignment, e.g., features of cats in the target domain might be mapped near
features of dogs in the source domain. In this paper, we propose a novel
method, termed Noise Tolerant Domain Adaptation, for WSDA. Specifically, we
adopt the cluster assumption and learn clusters discriminatively with class
prototypes in the embedding space. We propose to leverage the location
information of the data points in the embedding space and model the location
information with a Gaussian mixture model to identify noisy source data. We
then design a network which incorporates the Gaussian mixture noise model as a
sub-module for unsupervised noise removal and propose a novel cluster-level
adversarial adaptation method which aligns unlabeled target data with the less
noisy class prototypes for mapping the semantic structure across domains. We
conduct extensive experiments to evaluate the effectiveness of our method on
both general images and medical images from COVID-19 and e-commerce datasets.
The results show that our method significantly outperforms state-of-the-art
WSDA methods.
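The unsupervised noise-removal step can be sketched in miniature: assuming, in line with the cluster assumption above, that clean source features concentrate around a class prototype while mislabeled samples drift away, a two-component Gaussian mixture over distances can flag the far component as noisy. The toy data, the median-based prototype estimate, and the 0.5 threshold below are invented for illustration and are not the paper's implementation (which models location information in the embedding space directly); the sketch requires scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated source-domain embeddings for one class: 90 clean points near the
# class prototype, 10 noisy (mislabeled) points scattered far from it.
clean = rng.normal(loc=0.0, scale=0.5, size=(90, 2))
noisy = rng.normal(loc=4.0, scale=1.0, size=(10, 2))
features = np.vstack([clean, noisy])

# Robust prototype estimate (median, so the noisy points do not bias it),
# then each sample's distance to the prototype.
prototype = np.median(features, axis=0)
dist = np.linalg.norm(features - prototype, axis=1).reshape(-1, 1)

# Fit a 2-component 1-D Gaussian mixture on the distances: one component for
# clean samples (small distances), one for noisy samples (large distances).
gmm = GaussianMixture(n_components=2, random_state=0).fit(dist)
noisy_component = int(np.argmax(gmm.means_.ravel()))
p_noisy = gmm.predict_proba(dist)[:, noisy_component]

# Flag samples whose posterior probability of being noisy exceeds 0.5.
is_noisy = p_noisy > 0.5
print(is_noisy.sum(), "samples flagged as noisy out of", len(features))
```

The flagged samples would then be down-weighted or removed before the cluster-level adversarial adaptation step.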
Related papers
- Trust your Good Friends: Source-free Domain Adaptation by Reciprocal
Neighborhood Clustering [50.46892302138662]
We address the source-free domain adaptation problem, where the source pretrained model is adapted to the target domain in the absence of source data.
Our method is based on the observation that target data, which might not align with the source domain classifier, still forms clear clusters.
We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood.
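As a toy illustration of the reciprocal-neighbor idea (a generic sketch, not this paper's code): two target features are reciprocal neighbors when each lies in the other's k-nearest-neighbor set, so reciprocal pairs tend to stay inside a cluster.

```python
import numpy as np

def knn_indices(feats, k):
    # Pairwise squared Euclidean distances; exclude self via an inf diagonal.
    sq = np.sum(feats**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * feats @ feats.T
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def reciprocal_pairs(feats, k=3):
    nn = knn_indices(feats, k)
    nn_sets = [set(row) for row in nn]
    pairs = set()
    for i, row in enumerate(nn):
        for j in row:
            if i in nn_sets[j]:          # mutual membership => reciprocal
                pairs.add((min(i, j), max(i, j)))
    return pairs

rng = np.random.default_rng(0)
# Two tight, well-separated clusters of 5 points each.
feats = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
pairs = reciprocal_pairs(feats, k=3)
assert all((i < 5) == (j < 5) for i, j in pairs)  # no cross-cluster pairs
print(len(pairs), "reciprocal pairs")
```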
arXiv Detail & Related papers (2023-09-01T15:31:18Z)
- Noisy Universal Domain Adaptation via Divergence Optimization for Visual Recognition [30.31153237003218]
A novel scenario named Noisy UniDA is proposed to transfer knowledge from a labeled source domain to an unlabeled target domain.
A multi-head convolutional neural network framework is proposed to address all of the challenges faced in the Noisy UniDA at once.
arXiv Detail & Related papers (2023-04-20T14:18:38Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
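For reference, the plain squared Maximum Mean Discrepancy with an RBF kernel (the standard formulation, not DaC's memory-bank variant) compares two sample sets via their kernel mean embeddings; a minimal sketch:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix k(x_i, y_j) = exp(-gamma * ||x_i - y_j||^2).
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
# MMD^2 is near zero for samples from the same distribution, larger otherwise.
same = mmd2(rng.normal(0, 1, (200, 4)), rng.normal(0, 1, (200, 4)))
diff = mmd2(rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4)))
print(same, diff)
```

Minimizing such a loss between two feature sets pulls their distributions together in the kernel's feature space.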
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- ProxyMix: Proxy-based Mixup Training with Label Refinery for Source-Free Domain Adaptation [73.14508297140652]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose an effective method named Proxy-based Mixup training with label refinery (ProxyMix).
Experiments on three 2D image and one 3D point cloud object recognition benchmarks demonstrate that ProxyMix yields state-of-the-art performance for source-free UDA tasks.
arXiv Detail & Related papers (2022-05-29T03:45:00Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
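Suppressing high-entropy regions when generating pseudo labels can be illustrated with a simple confidence filter (a generic sketch with an invented threshold, not this paper's method):

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy of each row of class probabilities.
    return -np.sum(p * np.log(p + eps), axis=1)

probs = np.array([
    [0.95, 0.03, 0.02],   # confident prediction -> low entropy, keep
    [0.40, 0.35, 0.25],   # uncertain prediction -> high entropy, discard
])
pseudo = probs.argmax(axis=1)

# Keep pseudo labels only where entropy is below half the maximum (log C).
keep = entropy(probs) < 0.5 * np.log(probs.shape[1])
print(pseudo[keep])  # → [0]
```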
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation [47.907168218249694]
We address the source-free domain adaptation problem, where the source pretrained model is adapted to the target domain in the absence of source data.
We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity.
We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood.
arXiv Detail & Related papers (2021-10-08T15:40:18Z)
- Divergence Optimization for Noisy Universal Domain Adaptation [32.05829135903389]
Universal domain adaptation (UniDA) has been proposed to transfer knowledge learned from a label-rich source domain to a label-scarce target domain.
This paper introduces a two-head convolutional neural network framework to solve all problems simultaneously.
arXiv Detail & Related papers (2021-04-01T04:16:04Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.