Domain Adaptive Object Detection via Balancing Between Self-Training and
Adversarial Learning
- URL: http://arxiv.org/abs/2311.04815v1
- Date: Wed, 8 Nov 2023 16:40:53 GMT
- Title: Domain Adaptive Object Detection via Balancing Between Self-Training and
Adversarial Learning
- Authors: Muhammad Akhtar Munir, Muhammad Haris Khan, M. Saquib Sarfraz, Mohsen
Ali
- Abstract summary: Deep learning based object detectors struggle to generalize to a new target domain bearing significant variations in object and background.
Current methods align domains by using image or instance-level adversarial feature alignment.
We propose to leverage the model's predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment.
- Score: 19.81071116581342
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning based object detectors struggle to generalize to a new target
domain bearing significant variations in object and background. Most current
methods align domains by using image or instance-level adversarial feature
alignment. This often suffers due to unwanted background and lacks
class-specific alignment. A straightforward approach to promote class-level
alignment is to use high-confidence predictions on the unlabeled domain as
pseudo-labels. These predictions are often noisy since the model is poorly
calibrated under domain shift. In this paper, we propose to leverage the
model's predictive uncertainty to strike the right balance between adversarial feature
alignment and class-level alignment. We develop a technique to quantify
predictive uncertainty on class assignments and bounding-box predictions. Model
predictions with low uncertainty are used to generate pseudo-labels for
self-training, whereas the ones with higher uncertainty are used to generate
tiles for adversarial feature alignment. This synergy between tiling around
uncertain object regions and generating pseudo-labels from highly certain
object regions allows capturing both image and instance-level context during
the model adaptation. We report a thorough ablation study to reveal the impact of
different components in our approach. Results on five diverse and challenging
adaptation scenarios show that our approach outperforms existing
state-of-the-art methods by noticeable margins.
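The core routing rule of the abstract (low-uncertainty detections become pseudo-labels for self-training, high-uncertainty ones become tiles for adversarial feature alignment) can be sketched as below. This is a minimal illustration, not the authors' implementation: the detection format, the entropy-based class uncertainty, the MC-dropout box variance, and both thresholds are assumptions for the sketch.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a class-probability vector; higher = more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def route_detections(detections, class_thresh, box_var_thresh):
    """Split detections into pseudo-labels (certain) and tile regions (uncertain).

    Each detection is a dict with:
      'probs'   - softmax class probabilities,
      'box_var' - variance of box coordinates across stochastic forward passes.
    """
    pseudo_labels, tile_regions = [], []
    for det in detections:
        class_uncertain = entropy(det['probs']) > class_thresh
        box_uncertain = det['box_var'] > box_var_thresh
        if not class_uncertain and not box_uncertain:
            pseudo_labels.append(det)   # low uncertainty -> self-training target
        else:
            tile_regions.append(det)    # high uncertainty -> adversarial alignment tile
    return pseudo_labels, tile_regions
```

A detection only becomes a pseudo-label when both the class assignment and the box localization are confident, which matches the paper's requirement of quantifying uncertainty on both predictions.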
Related papers
- Cross Domain Object Detection via Multi-Granularity Confidence Alignment based Mean Teacher [14.715398100791559]
Cross domain object detection learns an object detector for an unlabeled target domain by transferring knowledge from an annotated source domain.
In this study, we find that confidence misalignment of the predictions, including category-level overconfidence, instance-level task-confidence inconsistency, and image-level confidence misfocusing, leads to suboptimal performance on the target domain.
arXiv Detail & Related papers (2024-07-10T15:56:24Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network (BACG) with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e., Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Predicting Class Distribution Shift for Reliable Domain Adaptive Object Detection [2.5193191501662144]
Unsupervised Domain Adaptive Object Detection (UDA-OD) uses unlabelled data to improve the reliability of robotic vision systems in open-world environments.
Previous approaches to UDA-OD based on self-training have been effective in overcoming changes in the general appearance of images.
We propose a framework for explicitly addressing class distribution shift to improve pseudo-label reliability in self-training.
arXiv Detail & Related papers (2023-02-13T00:46:34Z)
- In and Out-of-Domain Text Adversarial Robustness via Label Smoothing [64.66809713499576]
We study the adversarial robustness provided by various label smoothing strategies in foundational models for diverse NLP tasks.
Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT, against various popular attacks.
We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.
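As an illustration of the label smoothing studied above, here is a minimal sketch of a smoothed cross-entropy loss. The uniform eps/k split of the smoothing mass is one common formulation, assumed here for illustration, and not necessarily the exact variant used in that paper.

```python
import numpy as np

def log_softmax(logits):
    z = logits - np.max(logits)            # subtract max for numerical stability
    return z - np.log(np.sum(np.exp(z)))

def label_smoothing_ce(logits, target, eps=0.1):
    """Cross-entropy against a smoothed target: (1 - eps) mass on the true
    class, eps spread uniformly over all classes, discouraging over-confident
    logits."""
    logits = np.asarray(logits, dtype=float)
    k = logits.shape[0]
    smooth = np.full(k, eps / k)
    smooth[target] += 1.0 - eps
    return float(-np.sum(smooth * log_softmax(logits)))
```

With eps=0 this reduces to the standard cross-entropy; increasing eps penalizes very peaked predictions, which is the mechanism behind the reduced over-confident errors reported above.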
arXiv Detail & Related papers (2022-12-20T14:06:50Z)
- Synergizing between Self-Training and Adversarial Learning for Domain Adaptive Object Detection [11.091890625685298]
We study adapting trained object detectors to unseen domains manifesting significant variations of object appearance, viewpoints and backgrounds.
We propose to leverage model predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment.
arXiv Detail & Related papers (2021-10-01T08:10:00Z)
- Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection [34.18382705952121]
Unlabelled domain adaptive object detection aims to adapt detectors from a labelled source domain to an unlabelled target domain.
Adversarial learning may impair the alignment of well-aligned samples, as it merely aligns the global distributions across domains.
We design an uncertainty-aware domain adaptation network (UaDAN) that introduces conditional adversarial learning to align well-aligned and poorly-aligned samples separately.
arXiv Detail & Related papers (2021-02-27T15:04:07Z)
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach achieves state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
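A minimal sketch of the idea of rectifying pseudo-label learning with an uncertainty estimate: per-pixel cross-entropy is down-weighted where uncertainty is high, with a regularizer that discourages the model from simply predicting high uncertainty everywhere. The exponential weighting below is an assumption for illustration, not that paper's exact formulation.

```python
import numpy as np

def rectified_pseudo_label_loss(ce_per_pixel, uncertainty):
    """Uncertainty-weighted pseudo-label loss.

    ce_per_pixel - cross-entropy of each pixel against its pseudo-label
    uncertainty  - per-pixel uncertainty estimate (e.g. prediction variance)

    High-uncertainty pixels contribute exp(-u) of their loss; the additive
    `u` term penalizes inflating uncertainty to escape the loss entirely.
    """
    ce = np.asarray(ce_per_pixel, dtype=float)
    u = np.asarray(uncertainty, dtype=float)
    weight = np.exp(-u)
    return float(np.mean(weight * ce + u))
```

When the uncertainty is zero everywhere, this reduces to the plain mean cross-entropy against the pseudo-labels; noisy pseudo-labels are softened rather than trusted as ground truth.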
arXiv Detail & Related papers (2020-03-08T12:37:19Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method BA$^3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$^3$US surpasses state-of-the-art methods for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.