Proposal-Level Unsupervised Domain Adaptation for Open World Unbiased Detector
- URL: http://arxiv.org/abs/2311.02342v1
- Date: Sat, 4 Nov 2023 07:46:45 GMT
- Title: Proposal-Level Unsupervised Domain Adaptation for Open World Unbiased Detector
- Authors: Xuanyi Liu, Zhongqi Yue, Xian-Sheng Hua
- Abstract summary: We build an unbiased foreground predictor by re-formulating the task under Unsupervised Domain Adaptation.
We adopt the simple and effective self-training method to learn a predictor based on the domain-invariant foreground features.
Our pipeline can be combined with various detection frameworks and UDA methods, as empirically validated in OWOD evaluation.
- Score: 35.334125159092025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open World Object Detection (OWOD) combines open-set object detection with
incremental learning capabilities to handle the challenge of the open and
dynamic visual world. Existing works assume that a foreground predictor trained
on the seen categories can be directly transferred to identify the unseen
categories' locations by selecting the top-k most confident foreground
predictions. However, the assumption is hardly valid in practice. This is
because the predictor is inevitably biased to the known categories, and fails
under the shift in the appearance of the unseen categories. In this work, we
aim to build an unbiased foreground predictor by re-formulating the task under
Unsupervised Domain Adaptation, where the current biased predictor helps form
the domains: the seen object locations and confident background locations as
the source domain, and the rest ambiguous ones as the target domain. Then, we
adopt the simple and effective self-training method to learn a predictor based
on the domain-invariant foreground features, hence achieving unbiased
prediction robust to the shift in appearance between the seen and unseen
categories. Our pipeline can be combined with various detection frameworks and
UDA methods, as empirically validated in OWOD evaluation, where we achieve
state-of-the-art performance.
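
To make the re-formulation concrete, below is a minimal PyTorch sketch of the proposal-level idea, assuming per-proposal features have already been extracted by a detector (e.g., from RPN proposals). The current (biased) predictor splits proposals into a source domain (proposals matched to seen-category boxes plus confidently-rejected background) and a target domain (the ambiguous rest); a foreground head is then self-trained with pseudo-labels on the target proposals. All names (ForegroundHead, split_domains, self_train_step), thresholds, and the random features are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ForegroundHead(nn.Module):
    """Binary foreground/background predictor over per-proposal features."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, feats):
        return self.net(feats).squeeze(-1)  # foreground logits, shape (N,)


def split_domains(fg_scores, matched_to_seen_gt, bg_thresh=0.05):
    """Form the two domains from the current (biased) predictor:
    source = proposals matched to seen-category boxes (label 1) plus
             confidently-rejected background (label 0);
    target = the remaining ambiguous proposals."""
    src_fg = matched_to_seen_gt
    src_bg = (~matched_to_seen_gt) & (fg_scores < bg_thresh)
    source = src_fg | src_bg
    target = ~source
    labels = src_fg.float()
    return source, target, labels


def self_train_step(head, opt, feats, source, target, src_labels, pseudo_thresh=0.7):
    """One self-training step: supervised loss on the source domain plus a
    pseudo-label loss on confidently-predicted target-domain proposals."""
    logits = head(feats)
    loss = F.binary_cross_entropy_with_logits(logits[source], src_labels[source])
    with torch.no_grad():
        tgt_prob = torch.sigmoid(logits[target])
        confident = (tgt_prob > pseudo_thresh) | (tgt_prob < 1.0 - pseudo_thresh)
        pseudo = (tgt_prob > 0.5).float()
    if confident.any():
        loss = loss + F.binary_cross_entropy_with_logits(
            logits[target][confident], pseudo[confident]
        )
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(512, 256)                  # stand-in proposal features
    matched = torch.zeros(512, dtype=torch.bool)   # proposals matched to seen-class boxes
    matched[:64] = True
    head = ForegroundHead()
    opt = torch.optim.SGD(head.parameters(), lr=0.1)
    for _ in range(20):
        with torch.no_grad():
            scores = torch.sigmoid(head(feats))
        source, target, labels = split_domains(scores, matched)
        loss = self_train_step(head, opt, feats, source, target, labels)
    print(f"final loss: {loss:.4f}")
```

In a real detector the matching mask and features would come from the proposal stage, the thresholds would need tuning, and any off-the-shelf UDA method could replace the plain pseudo-labeling loss here; that interchangeability is the flexibility the abstract refers to.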
Related papers
- Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features [37.518025833882334]
Open Domain Generalization (ODG) is a challenging task as it deals with distribution shifts and category shifts.
Previous work has used multiple source-specific networks, which involve a high cost.
This paper proposes a method that can handle ODG using only a single network.
arXiv Detail & Related papers (2023-12-08T16:22:10Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify target-domain classes unseen in the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose [69.14556386954325]
This paper bridges the domain gap between synthetic and real data in visual regression via global feature alignment and local refinement.
Our method incorporates an explicit self-supervised manifold regularization, revealing consistent cumulative target dependency across domains.
We learn unified implicit neural functions that estimate the relative direction and distance of targets to their nearest class bins, refining the target classification predictions.
arXiv Detail & Related papers (2023-05-18T08:42:41Z)
- Dirichlet-based Uncertainty Calibration for Active Domain Adaptation [33.33529827699169]
Active domain adaptation (DA) aims to maximally boost the model adaptation on a new target domain by actively selecting limited target data to annotate.
Traditional active learning methods may be less effective since they do not consider the domain shift issue.
We propose a Dirichlet-based Uncertainty Calibration (DUC) approach for active DA, which simultaneously achieves the mitigation of miscalibration and the selection of informative target samples.
arXiv Detail & Related papers (2023-02-27T14:33:29Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- Joint Distribution Alignment via Adversarial Learning for Domain Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches perform this task through adversarial learning, yet still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
arXiv Detail & Related papers (2021-09-19T00:27:08Z)
- Adversarial Unsupervised Domain Adaptation Guided with Deep Clustering for Face Presentation Attack Detection [0.8701566919381223]
Face Presentation Attack Detection (PAD) has drawn increasing attention as a means to secure face recognition systems.
We propose an end-to-end learning framework based on Domain Adaptation (DA) to improve PAD generalization capability.
arXiv Detail & Related papers (2021-02-13T05:34:40Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)