Towards Robust Adaptive Object Detection under Noisy Annotations
- URL: http://arxiv.org/abs/2204.02620v1
- Date: Wed, 6 Apr 2022 07:02:37 GMT
- Title: Towards Robust Adaptive Object Detection under Noisy Annotations
- Authors: Xinyu Liu, Wuyang Li, Qiushi Yang, Baopu Li, Yixuan Yuan
- Abstract summary: Existing methods assume that the source domain labels are completely clean, yet large-scale datasets often contain error-prone annotations due to instance ambiguity.
We propose a Noise Latent Transferability Exploration framework to address this issue.
NLTE improves the mAP by 8.4% under 60% corrupted annotations and even approaches the ideal upper bound of training on a clean source dataset.
- Score: 40.25050610617893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain Adaptive Object Detection (DAOD) models a joint distribution of images
and labels from an annotated source domain and learns a domain-invariant
transformation to estimate the target labels with the given target domain
images. Existing methods assume that the source domain labels are completely
clean, yet large-scale datasets often contain error-prone annotations due to
instance ambiguity, which may lead to a biased source distribution and severely
degrade the performance of the domain adaptive detector in practice. In this
paper, we present the first effort to formulate noisy DAOD and propose a
Noise Latent Transferability Exploration (NLTE) framework to address this
issue. It is featured with 1) Potential Instance Mining (PIM), which leverages
eligible proposals to recapture the miss-annotated instances from the
background; 2) Morphable Graph Relation Module (MGRM), which models the
adaptation feasibility and transition probability of noisy samples with
relation matrices; 3) Entropy-Aware Gradient Reconcilement (EAGR), which
incorporates the semantic information into the discrimination process and
enforces the gradients provided by noisy and clean samples to be consistent
towards learning domain-invariant representations. A thorough evaluation on
benchmark DAOD datasets with noisy source annotations validates the
effectiveness of NLTE. In particular, NLTE improves the mAP by 8.4% under 60%
corrupted annotations and even approaches the ideal upper bound of training on
a clean source dataset.
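The abstract does not spell out the exact EAGR formulation, but its stated goal is to make the gradients from noisy and clean samples consistent. As one plausible illustration only, such a reconcilement step can be sketched as projecting away the conflicting component when the two gradients disagree; the function names (`gradient_cosine`, `reconcile`) are hypothetical and not from the paper:

```python
import numpy as np

def gradient_cosine(g_clean: np.ndarray, g_noisy: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    num = float(np.dot(g_clean, g_noisy))
    den = float(np.linalg.norm(g_clean) * np.linalg.norm(g_noisy) + 1e-12)
    return num / den

def reconcile(g_clean: np.ndarray, g_noisy: np.ndarray) -> np.ndarray:
    """If the gradients conflict (negative cosine), project the noisy
    gradient onto the plane orthogonal to the clean gradient before
    averaging; otherwise average the two gradients directly."""
    if gradient_cosine(g_clean, g_noisy) < 0.0:
        coeff = np.dot(g_noisy, g_clean) / (np.dot(g_clean, g_clean) + 1e-12)
        g_noisy = g_noisy - coeff * g_clean  # remove the conflicting component
    return 0.5 * (g_clean + g_noisy)
```

The projection step mirrors gradient-surgery-style methods: when a noisy sample pushes the model in a direction opposed to clean samples, only the non-conflicting component of its gradient is kept, so noisy annotations cannot directly cancel the clean learning signal.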
Related papers
- Online Continual Domain Adaptation for Semantic Image Segmentation Using
Internal Representations [28.549418215123936]
We develop an online UDA algorithm for semantic segmentation of images that improves model generalization on unannotated domains.
We evaluate our approach on well established semantic segmentation datasets and demonstrate it compares favorably against state-of-the-art (SOTA) semantic segmentation methods.
arXiv Detail & Related papers (2024-01-02T04:48:49Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- Towards Robust Cross-domain Image Understanding with Unsupervised Noise Removal [18.21213151403402]
We find that contemporary domain adaptation methods for cross-domain image understanding perform poorly when the source domain is noisy.
We propose a novel method, termed Noise Tolerant Domain Adaptation, for Weakly Supervised Domain Adaptation (WSDA).
We conduct extensive experiments to evaluate the effectiveness of our method on both general images and medical images from COVID-19 and e-commerce datasets.
arXiv Detail & Related papers (2021-09-09T14:06:59Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much it can help address domain shifts if we further have a few target samples labeled.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- ANL: Anti-Noise Learning for Cross-Domain Person Re-Identification [25.035093667770052]
We propose an Anti-Noise Learning (ANL) approach, which contains two modules.
The FDA module is designed to gather id-related samples and disperse id-unrelated samples through camera-wise contrastive learning and adversarial adaptation.
The Reliable Sample Selection (RSS) module utilizes an Auxiliary Model to correct noisy labels and select reliable samples for the Main Model.
arXiv Detail & Related papers (2020-12-27T02:38:45Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and studies how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.